Group TDIs and/or VBA

Hi all,

I want to generate an average TDI image for subjects in two groups (from single-shell data) and compare the groups either qualitatively or, better, quantitatively (i.e., a voxel-based analysis, as suggested to be possible in Calamante et al. 2010).

For the qualitative comparison, I was thinking of simply doing the following (a rough command-level sketch follows the list):

1- perform preprocessing: denoising and unringing; motion and distortion correction; bias field correction; and global intensity normalisation across subjects
2- estimate response function; perform CSD; perform tractography (ACT); perform SIFT1
3- to normalise the tracts, use tcktransform (as detailed in this post), after generating the necessary warps (e.g., in SPM) and correcting them with warpcorrect.
4- perform tckmap on the resulting normalised tracts (as specified in this post).
5- average the resulting TDIs across participants within each group and compare the outputs
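For concreteness, here is a rough sketch of steps 1–2 and 4–5 in MRtrix3 commands (all filenames are placeholders, and the preprocessing options depend on the acquisition, so please treat this as illustrative rather than exact):

```
# Step 1: per-subject preprocessing (options depend on acquisition)
dwidenoise dwi.mif dwi_den.mif
mrdegibbs dwi_den.mif dwi_den_unr.mif
dwifslpreproc dwi_den_unr.mif dwi_preproc.mif -rpe_none -pe_dir AP
dwibiascorrect ants dwi_preproc.mif dwi_unbiased.mif
# ...followed by global intensity normalisation across subjects (e.g. dwinormalise group)

# Step 2: response function, CSD, tractography (ACT), SIFT1
dwi2response tournier dwi_unbiased.mif response.txt
dwi2fod csd dwi_unbiased.mif response.txt wmfod.mif
tckgen wmfod.mif tracks.tck -act 5tt.mif -seed_dynamic wmfod.mif -select 10M   # 5tt.mif from 5ttgen
tcksift tracks.tck wmfod.mif tracks_sift.tck

# Steps 4-5: TDI on the normalised tracts, then within-group averaging
tckmap tracks_template.tck -template template.mif tdi.mif
mrmath subj*/tdi.mif mean group_mean_tdi.mif
```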

A couple of questions: if using tcksift2, is there a way to normalise the weights, or to apply them to images in standard space? Similarly, could/should mu (from SIFT1 in step 2 above) be applied to the normalised TDIs?
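To make the question concrete (filenames are placeholders): tcksift2 can write out both the per-streamline weights and mu, and tckmap accepts the weights directly. My assumption (unverified) is that the weights remain valid for the normalised tracts, since tcktransform should preserve streamline order:

```
tcksift2 tracks.tck wmfod.mif weights.txt -out_mu mu.txt
tckmap tracks_template.tck -template template.mif -tck_weights_in weights.txt tdi_weighted.mif
```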

For the quantitative comparison, I think a modified version of the single-shell FBA pipeline could work (a command-level sketch follows the list):

  1. perform preprocessing: denoising and unringing; motion and distortion correction; bias field correction; and global intensity normalisation across subjects
  2. compute an (average) white matter response function
  3. upsample DW images
  4. compute upsampled brain mask images
  5. fibre orientation distribution estimation (spherical deconvolution)
  6. generate a study-specific unbiased FOD template
  7. register all subject FOD images to the FOD template
  8. compute the template mask (intersection of all subject masks in template space)
  9. warp FOD images to template space (with reorientation)
  10. perform whole-brain fibre tractography on all subject FOD images in template space, using the template mask as the seed
  11. perform SIFT2 on subject tractograms in template space
  12. generate tract density images (TDI) for each subject (and multiply by the respective mu value?)
  13. average TDIs across individuals per group; perform VBA
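A command-level sketch of steps 6–13, loosely following the FBA documentation (all paths are placeholders):

```
# Step 6: study-specific unbiased FOD template
population_template fod_input/ -mask_dir mask_input/ wmfod_template.mif

# Steps 7-9: register each subject FOD to the template, compute the template mask,
# and warp the FODs (with reorientation)
mrregister wmfod.mif -mask1 mask.mif wmfod_template.mif -nl_warp subj2template.mif template2subj.mif
mrtransform mask.mif -warp subj2template.mif -interp nearest mask_in_template.mif
mrmath subj*/mask_in_template.mif min template_mask.mif -datatype bit
mrtransform wmfod.mif -warp subj2template.mif -reorient_fod yes wmfod_in_template.mif

# Steps 10-12: tractography, SIFT2 and weighted TDI in template space
tckgen wmfod_in_template.mif tracks.tck -seed_image template_mask.mif -mask template_mask.mif -select 10M
tcksift2 tracks.tck wmfod_in_template.mif weights.txt -out_mu mu.txt
tckmap tracks.tck -template template_mask.mif -tck_weights_in weights.txt tdi.mif
mrcalc tdi.mif $(cat mu.txt) -mult tdi_mu.mif   # the mu scaling I am asking about in step 12

# Step 13: within-group averaging
mrmath group1/subj*/tdi_mu.mif mean group1_mean_tdi.mif
```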

Your thoughts on the above two approaches are very much appreciated.

Juan

Hi Juan,

The wording in the 2010 manuscript regarding the possibility of performing VBA on TDI data has been a cause of regret for all involved.

I would strongly advise reading this manuscript to see why your proposed experiment is in fact an inferior alternative to what we already have with FBA.

Rob


Hi Rob,

Many thanks for your previous answer. I would appreciate it if you or anyone else could help me with a couple of further related queries.

I’ve gone ahead with the first pipeline for qualitative evaluation that I outlined in my previous post. However, I am not sure about the output I am getting.

I have normalised my tracts with tcktransform as indicated in this post, which involves:

1- Generating xyz identity images in target space with warpinit
2- Applying the inverse warps to these images (in SPM in my case)
3- Correcting and merging the identity warps with warpcorrect
4- Normalising the tracts with tcktransform (sketched in command form below).
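In command form, the warp-related steps look roughly like this (placeholder filenames; the SPM step happens outside MRtrix):

```
# 1. Identity warp images (x, y, z components) on the template grid
warpinit template.mif identity_warp[].nii

# 2. (In SPM) apply the inverse (template-to-subject) deformation to each
#    identity_warp?.nii, giving inv_identity_warp0..2.nii in subject space

# 3. Merge the warped identity images and fix out-of-FOV voxels
mrcat inv_identity_warp0.nii inv_identity_warp1.nii inv_identity_warp2.nii -axis 3 inv_warp.mif
warpcorrect inv_warp.mif inv_warp_corrected.mif

# 4. Transform the streamlines into template space
tcktransform tracks_sift.tck inv_warp_corrected.mif tracks_template.tck
```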

I then use tckmap to generate the TDI. The output of this, however, has three volumes. I assume I need only pay attention to the first volume? What are the other two volumes (I take it they are related to the identity images from warpinit and warpcorrect)?

Also, I am getting very large values in the TDI (100,000+) after normalising the tracts. FYI, prior to normalisation, I generated 10M streamlines from an ROI and then filtered them down to 8M with SIFT.

I am thankful for any insights on all this.

Juan


That’s unexpected: a TDI should be a single scalar volume. Maybe you supplied the -dec option to tckmap…?
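For reference, the default call produces a single 3D scalar volume, whereas -dec produces a 4D image with three volumes (the colour components):

```
tckmap tracks.tck -template template.mif tdi.mif        # scalar TDI: one 3D volume
tckmap tracks.tck -template template.mif -dec dec.mif   # DEC map: 3 volumes (x, y, z colour)
```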

You’d expect to get several thousand streamlines per voxel when running tckmap on an 8M-streamline tractogram, so I assume you mean that some of the voxels in the TDI have much higher values than expected, or compared to the rest of the brain? This might be due to regions with high compression in the non-linear registration, potentially pushing a lot of streamlines together into the same voxel. A screenshot would really help here…

That sounds like quite a small amount of SIFTing: we’d normally expect a factor of 10 or so downsampling, so maybe SIFTing 100M down to 10M or so. Otherwise there’s not much room for the algorithm to affect the densities very much…