Hi experts,
I’m currently working on a connectome project, using the MRtrix3 software to carry out whole-brain tractography and to create maps of some metrics (FA, MD, etc.).
I would like to know whether there is a suitable maximum number of streamlines to set as an option for tckgen with the ACT algorithm, in order to get a reliable reconstruction of the brain fibres with an acceptable trade-off between the quality of the results and the computational burden.
I’ve tried to find this information in discussions on this forum and in articles on the topic, without success. I also followed the MRtrix3 website tutorial step by step, but there is no specific instruction for this parameter.
At the moment I’m evaluating values in the range of 100–200k streamlines but, since I’m not sure about that, I was wondering if you could help me figure out some criteria for setting this value properly, beyond a simple visual inspection of the results. Alternatively, given that you are among the main experts in this field, is there any ‘rule of thumb’ for this parameter, perhaps a range of values that can ensure acceptable output?
Thank you in advance for your valuable suggestions and comments.
Best regards,
Francesco.
Welcome Francesco!
I think the most recent response to this question is this one; while the question comes up all the time, I agree it doesn’t surface readily in a search. If your analysis is restricted to defining voxel masks for specific anatomical pathways from which to sample voxel-wise quantitative values, then @Lee_Reid’s manuscript is highly relevant. You may be looking to edit a whole-brain tractogram rather than perform targeted tracking, and be interested less in quantitative volume than in the results of sampling those metrics, but much of the logic presented in that article nevertheless applies.
My own intuition is that 100–200k streamlines whole-brain is insufficient even for that kind of quantification, as the number of streamlines within the segmented bundles is going to be very small. But I’m also the one who set the precedent for 100M streamlines, so I might be a little biased…
Hopefully we’ll finally have a more authoritative answer to this pragmatic question in the not-too-distant future. Until then, my very generalised answer remains the same: devise a test that is as objective as possible, and look at the variance of your own experimental results as a function of streamline count.
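For a purely illustrative example of what such a test could look like in Python (hypothetical file name; assumes you’ve already extracted a per-streamline metric such as mean FA, e.g. with tcksample and its -stat_tck mean option):

```python
# Illustrative sketch: variance of the result as a function of
# streamline count, via repeated random subsampling.
import numpy as np

rng = np.random.default_rng(0)
fa = np.loadtxt("per_streamline_fa.txt")  # hypothetical file of per-streamline mean FA

for n in (1_000, 5_000, 20_000, 100_000):
    if n > len(fa):
        break
    # Repeatedly draw n streamlines and record the mean metric
    means = [rng.choice(fa, size=n, replace=False).mean() for _ in range(100)]
    print(f"{n:>7} streamlines: variance of mean FA = {np.var(means):.2e}")

# Keep increasing the count until the variance stops improving meaningfully.
```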
Cheers
Rob
Hi Francesco,
Just adding to @rsmith’s answer.
The good news: yes, there is an optimal number of streamlines to ensure your tractography is sufficiently reproducible. The relationship between reproducibility and streamline count is roughly exponential, which means there is usually a clear point beyond which collecting more streamlines is a waste of time.
The bad news: the reason you can’t get clear advice on what is optimal (trading off CPU time against reliability) is that it depends substantially on the images at play, what you’re aiming to do, and how carefully you do it.
The manuscript Rob has referred you to highlights this somewhat. For example, I showed that the number of streamlines needed to measure MD was 3x what was needed for FA, but only for one tract and not for some others. Similarly, obtaining reliable tensor metrics for the forceps major required 3x more streamlines than for the arcuate fasciculus. If pathology is present, if your image contains artefacts or bad tensor fits, or if you do not constrain your tractography well (e.g. no exclusion ROIs), the number can soar. So, for the community to have a ‘solution’ to questions like these would require some kind of adaptive tool that calculates the number based on your particular methodology.
If your goal is to measure a metric like FA then, yes, I suggest reading my manuscript (skip the explanation of binary trackmaps, as it doesn’t apply to you). In essence, the approach works like this (a rough Python sketch follows the list):
1. Generate some streamlines
2. Calculate the std dev of the metric (FA, MD, etc.) across streamlines
3. Use simple statistics to estimate how many streamlines you need to reduce that variability to an acceptable level (akin to a power analysis; see Eq. 1)
4. If you have enough, stop; if not, return to step 1.
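Something like the following (untested, hypothetical file name; assumes per-streamline mean FA values have already been extracted, e.g. with tcksample, and uses a standard sample-size formula for the mean as a stand-in for Eq. 1):

```python
# Sketch of the stopping rule: estimate how many streamlines are
# needed for the mean metric to be stable to within a given margin.
import numpy as np

def required_streamlines(values, margin, z=1.96):
    """Standard sample-size estimate for a mean: the smallest n such
    that the ~95% confidence-interval half-width (z * sigma / sqrt(n))
    falls below `margin`. A stand-in for Eq. 1 of the manuscript."""
    sigma = np.std(values, ddof=1)
    return int(np.ceil((z * sigma / margin) ** 2))

fa = np.loadtxt("tract_fa_per_streamline.txt")  # hypothetical file
n_needed = required_streamlines(fa, margin=0.005)  # tolerate ~±0.005 in mean FA
if len(fa) >= n_needed:
    print(f"Enough streamlines ({len(fa)} >= {n_needed})")
else:
    print(f"Generate more: have {len(fa)}, need ~{n_needed}")
```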
If you were using this for an entire network, it would look more like this:
1. Generate some streamlines (whole-brain)
2. Split them into the different tracts of interest
3. For each tract, calculate the std dev of the metric (FA, MD, etc.) across streamlines
4. For each tract, use simple statistics to estimate how many streamlines you need to reduce that variability to an acceptable level
5. Scale these numbers up by the inverse of the proportion of whole-brain streamlines that each tract represents. For example, if your corticospinal tract needs 5,000 streamlines and comprises 10% of your whole-brain tractogram, your whole-brain tractogram needs at least 50,000 streamlines
6. Take the maximum over the results of step 5
7. If you have enough, stop; if not, return to step 1.
I don’t have code that does this larger procedure, but it would not be difficult to write in Python or similar; an untested sketch is below. The downside of this approach is that it doesn’t directly optimise for network metrics, if that’s your goal, but it would certainly be more robust than an unqualified guess.
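Something along these lines (all file names hypothetical; assumes each tract’s per-streamline metric values have already been extracted, e.g. with connectome2tck and tcksample):

```python
# Untested sketch of steps 1-7 above for a whole-brain tractogram.
import numpy as np

def required_streamlines(values, margin, z=1.96):
    # Same sample-size estimate as in the single-tract sketch
    sigma = np.std(values, ddof=1)
    return int(np.ceil((z * sigma / margin) ** 2))

total_generated = 200_000  # streamlines in the current whole-brain tractogram
tracts = {"CST": "cst_fa.txt", "AF": "af_fa.txt"}  # hypothetical per-tract files

targets = []
for name, path in tracts.items():
    fa = np.loadtxt(path)
    n_tract = required_streamlines(fa, margin=0.005)
    fraction = len(fa) / total_generated  # tract's share of the whole brain
    targets.append(n_tract / fraction)    # scale up to a whole-brain count

target = int(max(targets))
print(f"Whole-brain tractogram needs at least {target} streamlines")
# If target > total_generated, generate more streamlines and repeat.
```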
Sorry if this sounds like a lot of work. I would really like to resist giving a hand-wavy answer as to roughly how many streamlines you need, because my own work shows that, frankly, any guess I make is likely to be wrong. However, I will back Rob’s suggestion that 200k is unlikely to be sufficient unless you have an extremely low number of ROIs in your whole-brain segmentation and high-quality, low-resolution tensor images.
Cheers,
Lee
Dear Rob and Lee,
Thank you very much for your accurate and thorough answers.
I’ll go through every step and evaluate the performance differences.
I may come back to you with further questions, but I hope that won’t be necessary.
Cheers,
Francesco.