I think we may need to invest some effort in translating between what you’re hoping to achieve and the actual data manipulation steps being performed.
Firstly, when I see the phrase “tract wise statistics”, my immediate interpretation is:
- Define a mask corresponding to those voxels belonging to the tract;
- Within each subject, and for some quantitative measure of interest (e.g. FA), take some statistic (e.g. mean) of the values within that mask, producing a single scalar value;
- Take this data, being one scalar value per subject (for any specific quantitative metric), and perform statistics across subjects.
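The three steps above can be sketched in code. This is a toy illustration only, using synthetic NumPy arrays in place of real FA volumes and a real tract mask (in practice these would be loaded from image files, and the mask derived from the bundle's streamlines); all array names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for real data: one binary tract mask, and one FA volume per
# subject (50 subjects). In a real pipeline these come from image files.
mask = np.zeros((10, 10, 10), dtype=bool)
mask[3:6, 3:6, 3:6] = True
fa_maps = [rng.uniform(0.2, 0.8, size=(10, 10, 10)) for _ in range(50)]

# Step 2: one scalar per subject: the mean FA within the tract mask.
per_subject_mean_fa = np.array([fa[mask].mean() for fa in fa_maps])

# Step 3: statistics across subjects on those per-subject scalars.
group_mean = per_subject_mean_fa.mean()
group_std = per_subject_mean_fa.std(ddof=1)
print(per_subject_mean_fa.shape, group_mean, group_std)
```

Note that the only thing carried forward from each subject is a single scalar per metric per bundle; no voxel-wise comparison takes place.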
This is certainly consistent with your stating that you are not doing voxel-wise statistics.
Where your own description deviates from my expectation is when you say that for each of your 50 track files, one for each pathway of interest, you are then performing TDI / TWI individually for each of those track files. In the more typical formulation, these per-bundle track files are merely used in the definition of the binary mask images to be used for sampling of the quantitative values.
What I can’t ascertain is whether this is a deliberate choice with a specific purpose, or the origin of your confusion.

For something like TDI, I can at least conceptually see how there might be some merit in generating an image per bundle, to see whether the distribution of connection density within a particular tract differs across participants; but given that you said you’re not doing voxel-wise statistics, I don’t know how you would hope to actually interrogate such an image.

For streamline-specific measures such as length and curvature, if I wanted to characterise those properties for a specific bundle, and I had a track file corresponding to the streamlines belonging to that bundle, I would just quantify those properties for each streamline within the bundle; I’m not convinced there would be any particular benefit in mapping that information back onto a voxel grid.

For data sampled from an image, e.g. FA / MD / AD / RD, I’m not sure what benefit performing TWI with an extracted bundle would confer. Sure, compared to performing TWI on a whole-brain tractogram, you would potentially prevent the smoothing of information from streamlines other than those belonging to that bundle; but if you’re again not performing voxel-wise statistics, and are just e.g. taking the mean value within the bundle to use in downstream statistics, the purpose of utilising TWI here is not clear.
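To make the streamline-specific point concrete: a property like length can be computed directly from the streamline vertex coordinates, with no detour through a voxel grid. This is a minimal sketch with synthetic coordinate arrays (in practice MRtrix3’s `tckstats` reports such statistics directly from a `.tck` file); the function name is illustrative.

```python
import numpy as np

def streamline_length(points: np.ndarray) -> float:
    """Sum of Euclidean distances between consecutive vertices (N x 3)."""
    return float(np.linalg.norm(np.diff(points, axis=0), axis=1).sum())

# A toy "bundle" of two streamlines.
bundle = [
    np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]]),  # length 2
    np.array([[0.0, 0.0, 0.0], [0.0, 3.0, 0.0]]),                   # length 3
]
lengths = [streamline_length(s) for s in bundle]
mean_length = float(np.mean(lengths))  # one scalar per bundle per subject
print(lengths, mean_length)
```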
> One more question, I have generated all the WM tract tck files in MNI space. Since this is a group analysis of 50 subjects, do I have to generate a template for each TDI/TWI and register the TDI/TWI of each subject to the template before computing the mean values?
This question confuses me, and I’m not quite sure where the misunderstanding lies, or indeed whether it reveals a misunderstanding in my own initial response.
Firstly, if the streamlines are “generated” in MNI space, and it’s those streamlines that are being used to generate TDIs / TWIs, then it’s not clear how you would have TDIs / TWIs for each subject in their own native space that are then in need of being warped to template space. Have you already warped the streamlines from MNI space to individual subject space?
One possibility is that you’ve warped many streamlines from all subjects into template space, done the bundle segmentation there based on data from all subjects, and have then taken from within each bundle only those streamlines belonging to each individual subject and warped those back to their respective individual subject spaces. If that’s the case, then that’s fine, though if true then mentioning that this was done in MNI space would actually be leading to more confusion rather than less.
If this is not the case, and you genuinely have one set of streamlines in MNI space, which you have then projected back to each individual subject space in order to compute TWIs, and now want to project that information back to template space, then this really needs to be more carefully considered in terms of the metrics under investigation.

For instance, say you’re computing TDI. You take the same template tractogram, warp it to two individual subjects, compute TDI on each subject’s own voxel grid, and then warp that back to the template. How are differences between subjects going to arise? In my mind, the only differences you will see are effects of finite resolution; variance, basically. Or consider length. If you’re using the exact same trajectories for all subjects, then the only way that the streamline lengths could vary between participants, given that they’re the same streamlines, is that the non-linear warp field has stretched / compressed them; in which case analysing the non-linear warp fields would be far more direct for your effect of interest than doing TWI.
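The length argument above can be illustrated with a toy example, reducing each hypothetical subject warp to a uniform scaling (real warp fields are of course non-linear; the point survives the simplification):

```python
import numpy as np

def length(points: np.ndarray) -> float:
    # Sum of Euclidean distances between consecutive vertices.
    return float(np.linalg.norm(np.diff(points, axis=0), axis=1).sum())

# The same template streamline mapped into two subject spaces.
template_streamline = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
subject_a = template_streamline * 1.0   # warp with no stretch
subject_b = template_streamline * 1.1   # warp with 10% stretch along the tract

# The trajectory is identical; only the warps differ. Any length
# difference between "subjects" is therefore a property of the warps.
print(length(subject_a), length(subject_b))
```

Since the length difference is entirely a property of the warp fields, analysing those fields directly would get at the effect with one fewer layer of indirection.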
Another possibility I can think of is that you’ve warped your subjects’ DWI data to MNI space, and then done subject-specific tractography there. This is not recommended; see this work.
Regardless of which of these is true, if you have already established for each subject a transformation to MNI space, then I would think that re-performing registration separately for different modalities, and applying different warps to different data modalities in order to get each to its respective template, would only get in the way of analysis. The purpose of normalisation to a template is to account for differences in brain shape; if each modality is derived from the same brain, then one would have to question, should the resulting transformations for different modalities differ, whether that difference is interesting information or artifact.
If I’m still off the mark, then I’d really need you to very carefully describe what the start and end points of your analysis are (e.g. one scalar value per subject / metric / bundle?), what steps you currently have in place, in what order, and where the gaps currently are. At the moment I’m having trouble establishing these with confidence, which really limits the precision with which I can advise.