Dear MRtrix experts,
I have a group of patients and I have generated the same tract of interest in all subjects. Is there a way to add and average the tracts in .tck format, so as to obtain the mean of the streamlines across the group of patients?
Thank you!
Hi Josue,
When it comes to ‘adding’ tracks, this is as simple as running tckedit and providing multiple input track files; they will all be concatenated together into a single output track file.
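As a minimal sketch of that concatenation step (all filenames here are hypothetical, and the command is guarded so it only runs if MRtrix3 and the data are actually present):

```shell
# Concatenate the same bundle reconstructed in several subjects into one track file.
# Filenames are hypothetical; the guard makes this a no-op if MRtrix3 or the
# inputs are absent.
if command -v tckedit >/dev/null 2>&1 && [ -f subj01_bundle.tck ]; then
  tckedit subj01_bundle.tck subj02_bundle.tck subj03_bundle.tck group_bundle.tck
fi
```

Note that this performs no spatial normalisation: .tck streamline coordinates are defined in each subject's own scanner space, so concatenating across subjects is only spatially meaningful once the per-subject tracks have been brought into a common space.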
The harder problem is ‘averaging’ tracks. I refer to this as generating an ‘exemplar’ streamline from a set of streamlines: a single trajectory that approximately follows the centre of the bundle. This was raised quite some time ago in this thread. While there hasn’t been any explicit progress in MRtrix3 here, the connectome2tck command does have the -exemplars option, which performs this sort of calculation, albeit only in the process of isolating streamlines belonging to individual edges of the connectome (which then interfaces with the connectome tool in mrview). It’s theoretically possible to provide similar functionality for any arbitrary set of input tracks, which I raised briefly in this GitHub issue. If there’s sufficient demand for it, I can add it to the list of requested features. Unfortunately it’s actually a deceptively difficult thing to do for arbitrary track inputs, and in my experience naive algorithms tend to produce unexpected results (I’d need to go rummaging through old code I wrote during my PhD to demonstrate it).
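For the connectome use case specifically, a sketch of the relevant invocation (filenames are hypothetical; I'm assuming the assignments file was produced by tck2connectome with -out_assignments, and nodes.mif is the node parcellation image):

```shell
# Generate one exemplar streamline per connectome edge, written to a single
# track file. Hypothetical filenames; guarded so this is a no-op if MRtrix3
# or the data are absent.
if command -v connectome2tck >/dev/null 2>&1 && [ -f tracks.tck ]; then
  connectome2tck tracks.tck assignments.txt exemplars \
      -files single -exemplars nodes.mif
fi
```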
Rob
Hi Rob,
Thank you so much for your kind response. Now that MRtrix has the ability to give some more biological meaning to the streamlines with SIFT, in addition to the availability of different connectomes (Parkinson's disease, depression, etc.), it would be great to have something that could give us an “averaged streamline tractogram”, similar to what we can get with the “group probabilistic maps” in FSL. When I want to correlate these probabilistic maps with lesions, stimulation fields, targets, etc., the only way to do it is to calculate an overlap or colocalisation coefficient, which I don’t think is the correct way. It would be nice to correlate the different interventions with actual streamlines; but anyway, thanks for taking the time to answer my question.
Regards,
Josue
It would be great to have something that could give us an “averaged streamline tractogram”, similar to what we can get with the “group probabilistic maps” in FSL. When I want to correlate these probabilistic maps with lesions, stimulation fields, targets, etc., the only way to do it is to calculate an overlap or colocalisation coefficient, which I don’t think is the correct way.
I’m not familiar with the FSL capability you’re referring to, but what it sounds like to me is something like performing tracking in individual space, transforming streamlines from subject to template space, and then combining the data across all subjects to produce a density / probability map in template space, which could then conceivably be transformed back into individual subject spaces if desired. This however does not depend on generation of exemplars in any way, which is what I think you refer to here:
the only way to do it is calculating an overlap or colocalization coefficient, which I don’t think is the correct way
So for this bit:
It would be nice to correlate the different interventions with actual streamlines
it’s not completely clear what you’re trying to achieve, but my instinct is that you want to produce an individual exemplar streamline, so that instead of sampling from a pathway volume, you are instead sampling along a streamline trajectory. If that’s the case, then yes, the capability you’d be looking for is being able to produce an exemplar streamline from a set of streamlines corresponding to a pathway of interest. This is technically possible, but it’s not something I’m likely to find the time to pursue in the near future. Correct me if my impression is wrong though.
@rsmith, thank you for your response. We want to calculate the “fraction of the streamlines” that are intersected by a mask (https://www.ncbi.nlm.nih.gov/pubmed/27335406). We are applying stimulation, and we have masks of the electrical fields; we want to infer the activation of the pathways by calculating the fraction of the streamlines that intersect the mask of the electrical fields. We have a group of patients with the pathway of interest, and we wanted to obtain a single pathway (.tck file) that would represent that group, then calculate the activation of the pathway as mentioned above.
Thank you!
Okay, I’m still a little confused as to exactly how you’re trying to frame your experiment; I think it might be an issue of what names we are both using to refer to different types of data. So let’s try to clarify things:
1. If you have a mask image, and a set of streamlines stored within a track file, then you can calculate the fraction of the streamlines that intersect the mask using tckedit quite easily.
2. If you want to combine streamlines from multiple subjects into a single file, that can also be done via tckedit, by simply providing multiple input track files; tckedit will simply concatenate all of these data into a single output track file.
3. If you then wanted to know the fraction of streamlines across your whole group that intersect the mask, you could use the concatenated data from step 2 and apply the method described in step 1. The result, though, would be exactly the same as if you had applied the method in step 1 to each subject independently, and then combined those results.
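A sketch of how points 1 and 3 could look at the terminal (filenames are hypothetical; I'm assuming tckedit's -include ROI option and tckinfo's -count option, and the MRtrix3 part is guarded so it only runs if the software and data are present):

```shell
# Point 1: restrict the bundle to streamlines traversing the mask, then
# compare streamline counts. Hypothetical filenames.
if command -v tckedit >/dev/null 2>&1 && [ -f bundle.tck ]; then
  tckedit bundle.tck -include mask.mif bundle_in_mask.tck   # intersecting streamlines only
  tckinfo -count bundle_in_mask.tck                         # numerator
  tckinfo -count bundle.tck                                 # denominator
fi
# Point 3: the pooled group fraction is the ratio of the summed counts; e.g.
# with two subjects having 30/100 and 10/50 streamlines in the mask:
awk 'BEGIN { printf "%g\n", (30 + 10) / (100 + 50) }'   # prints 0.266667
```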
Where I’m lost is whether or not the experiment you’re wanting to perform has / needs to have anything to do with the exemplar generation method I described & linked. To be clear: This process involves taking a set of streamlines, and producing a solitary streamline that follows a trajectory that is approximately the ‘mean’ of the trajectories of all input streamlines. Now this has uses in certain experimental designs; but to me it seems fundamentally incompatible with point 1, which involves calculating the fraction of streamlines that intersect a mask: If there’s only one streamline (the exemplar), how could one possibly calculate such a fraction?
I think what we need in order to get back on the same page is for you to clarify exactly what you meant in your original message by “add” and “average” tracks:
If by “add”, you were referring to “combining” / “concatenating” track data from multiple files into a single track file, this is covered in point 2; if not, you’ll need to be more specific in your description.
If by “average”, or “mean of streamlines”, you were in fact referring to something along the lines of point 1, I would discourage such a description. When I think of “averaging tracks”, I think of the exemplar production mechanism described above. If the intended process is in fact performing some calculation that involves the use of track data from multiple subjects, then I would not refer to this as “averaging tracks”. This may be a carry-over from prior use of e.g. FSL tools, where one obtains a visitation map for each subject, and these would logically be “averaged” across subjects; but I think it’s important not to conflate terms that refer to manipulation of visitation maps (which are ultimately image data) with those describing manipulation of raw streamline data.
@rsmith
Thank you so much Rob for your explanation.
After reading your articles, I think I have a clearer picture of a potential design; please correct me if I am wrong.
If that is correct, can I use that “value” to compute correlations, regressions and mean comparisons between patients?
Thank you in advance
Unless you are using the FOD template to define ROIs that you are then projecting to individual subject space in order to select your pathways of interest, there isn’t actually a need to generate an FOD template for this type of experiment.
It’s not clear to me whether there is a distinction between points 6 and 7, or whether this is just the same step erroneously described twice.
Summing the weights of the streamlines is simply a matter of computing the sum of the values within the track weights file. While there isn’t an MRtrix3 command to do this, the weights file is just a text file containing numerical values, which can be manipulated using any of a wide range of tools.
Ideally you want to be multiplying the sum of streamline weights of each subject by the value of the proportionality coefficient for the reconstruction of that subject, which is provided at the terminal by the tcksift2 command, or can be exported with the -out_mu option.
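Since the weights file is plain text, the sum-and-scale step can be sketched with standard tools. The input values below are hypothetical stand-ins: in practice the weights file would come from tcksift2 -out_weights and the mu value from tcksift2 -out_mu.

```shell
# Hypothetical stand-ins for the tcksift2 outputs:
weights=$(mktemp); mu_file=$(mktemp)
printf '# command history comment line\n0.8 1.2 0.5 1.5\n' > "$weights"
printf '2.5e-05\n' > "$mu_file"

# Sum the per-streamline weights (skipping any comment lines), then scale by mu:
sum=$(awk '/^#/ { next } { for (i = 1; i <= NF; ++i) s += $i } END { printf "%g", s }' "$weights")
mu=$(cat "$mu_file")
awk -v s="$sum" -v mu="$mu" 'BEGIN { printf "%g\n", s * mu }'   # prints 0.0001

rm -f "$weights" "$mu_file"
```

The mu-scaled sum is the quantity that is (approximately) comparable across subjects, rather than the raw sum of weights.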
Comparison of these values between subjects also ideally relies on aspects of quantitative DWI pre-processing that are common with the FBA pipeline; namely intensity normalisation and use of common response functions. While it’s feasible to perform such corrections post hoc, it’s not a theory that I want to be getting into the details of here.
@rsmith
Thank you so much!