As I am trying to refine my workflow for delineating the optic radiation (OR), I am currently experimenting with different options. Initially I calculated whole-brain connectomes with 10 million streamlines per subject, extracted those connecting the thalamus to occipital regions, and refined the result with inclusion/exclusion ROIs also used in other papers. Results looked promising, but were not accurate enough, as they could not explain clinical results.
So my idea was to increase the streamline count to 100 million, which led to much better representations, but also increased computation time dramatically.
Command used for whole-brain tractography: tckgen -act 5tt_nocoreg.mif -backtrack -seed_gmwmi gmwmSeed_nocoreg.mif -nthreads 8 -maxlength 250 -cutoff 0.1 -select 100000000 wmfod_norm.mif tracks_100M_nocoreg.tck
So instead I tried the classic way: placing seed ROIs over the thalamus (extracted from the FreeSurfer atlas, as used above in connectome2tck), including the LGN, and defining the same waypoints used before, in order to produce an adequate number of streamlines in a reasonable amount of time. The output seems to include the desired fibres of the OR, but also a lot of false positives and U-fibres, making it look very messy and hard to interpret.
Command used for classic way: tckgen -act 5tt_nocoreg.mif -backtrack -seed_image thalamus.nii -nthreads 8 -maxlength 250 -cutoff 0.1 -select 10000 wmfod_norm.mif tracks_10k_nocoreg_seedimage.tck -include ROI_ventricles.nii.gz -exclude ROI_sag.nii.gz -exclude ROI_cor.nii.gz
What's confusing me is that the output of the 100M connectome extraction looks much cleaner than the one from tckgen, although I do not see a reason why the results would differ this much: the relevant seed regions (thalamus / GM-WM seed boundaries) should be fairly similar, and the inclusion/exclusion ROIs are identical.
So is there anything I can do to obtain clean results (like the OR extracted from 100M) in a reasonable amount of time?
While one would think there would be some evidence out there regarding differences in outcomes between targeted tracking and editing of a whole-brain tractogram, no prior conversations or manuscripts come to mind that have addressed this with reasonable objectivity. The contrast between the two strategies is very frequently described from a theoretical perspective, but I'm not aware of it having been evaluated experimentally in detail.
While streamlines tractography is ideally symmetric, in that streamlines generated within a pathway in one direction should be indistinguishable from streamlines generated within the same pathway in the other direction, this is not guaranteed to be the case. It's quite common for targeted tracking experiments to be repeated with the seed and target regions exchanged in order to mitigate this. If doing so exposes a clear difference in the resulting tractograms, that's clear evidence of imperfect symmetry of the tracking algorithm.
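As a minimal sketch of such a seed/target exchange, using the file names from the original post (`occipital.nii` here is a hypothetical target mask standing in for whatever occipital region definition is in use):

```shell
# Forward: seed in the thalamus, require streamlines to reach the occipital target
tckgen wmfod_norm.mif OR_thal2occ.tck \
    -act 5tt_nocoreg.mif -backtrack \
    -seed_image thalamus.nii -include occipital.nii \
    -maxlength 250 -cutoff 0.1 -select 10000

# Reverse: seed in the occipital mask, require streamlines to reach the thalamus
tckgen wmfod_norm.mif OR_occ2thal.tck \
    -act 5tt_nocoreg.mif -backtrack \
    -seed_image occipital.nii -include thalamus.nii \
    -maxlength 250 -cutoff 0.1 -select 10000
```

A large visual or quantitative difference between the two outputs would indicate the asymmetry described above.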
(The story can be more complicated again in the case where seeding for whole-brain tractography is done in the WM, since then those streamlines would be traversing in one direction for part of the pathway and the other direction for the other part; but since your whole-brain tractogram was generated through GMWMI seeding, that shouldn’t apply)
One can potentially be deceived in these instances by differences in total streamline count. The way I find most constructive to think about it is as follows. For a given tracking algorithm and set of constraints, there is an infinite set of possible streamlines that satisfy all criteria; any particular experimental configuration simply manifests a finite subset of these, which is hoped to be adequately representative. But if one experimental configuration seems to create "more" streamlines that are somehow undesirable (such that the result is "less clean" overall), it could simply be that more streamlines overall are produced in that setting, such that the total number of unwanted streamlines is greater but the proportion of such streamlines is in fact no different.

Such an overall difference in reconstruction density can be masked visually when looking at the streamlines themselves, because as soon as the pathway becomes visually saturated (no voxels that are not occupied by at least one streamline), further increases in streamline density are essentially invisible. Reporting the number of streamlines remaining from the edited tractograms, and contrasting these against the targeted tracking, would be useful.
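Those counts can be obtained directly with `tckinfo -count` (the first file name below is a hypothetical placeholder for the edited whole-brain extraction):

```shell
# Explicit streamline counts: edited whole-brain extraction vs. targeted tracking
tckinfo -count OR_from_100M.tck
tckinfo -count tracks_10k_nocoreg_seedimage.tck
```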
With your targeted tracking, you are currently performing bidirectional tracking. If all seed points do indeed lie within the thalamus, then this should be relatively inconsequential: if the first unidirectional projection from the seed point leaves the sub-cortical GM and enters the WM, then the second unidirectional projection from the seed point in the opposite direction will not be permitted to do so. If there are however candidate seed points from your seed image that are classified as WM by the 5TT image, then such a streamline could potentially reach the target region when projecting in one direction but then travel to a completely different region of the brain with the opposite projection.
(This could potentially be mitigated by having a different seeding option that necessitates that the seed point lie within sub-cortical GM according to the 5TT image; I might think about how that might fit in with other changes I’ve had on the backburner)
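In the meantime, one workaround sketch under the assumption that the standard 5TT volume ordering applies (volume 1 is sub-cortical GM) would be to restrict the seed image to thalamic voxels that the 5TT image classifies as predominantly sub-cortical GM; the 0.5 threshold here is an arbitrary choice:

```shell
# Extract the sub-cortical GM partial volume fraction (volume 1 of the 5TT image)
mrconvert 5tt_nocoreg.mif -coord 3 1 -axes 0,1,2 sgm.mif
# Keep only thalamus voxels where sub-cortical GM is the dominant tissue
mrcalc sgm.mif 0.5 -gt thalamus.nii -mult thalamus_sgm.mif
```

The resulting `thalamus_sgm.mif` could then be supplied to `tckgen -seed_image` in place of `thalamus.nii`.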
There is scope for minute differences in the behaviour of the two approaches. For instance, in tckgen, ROIs are tested against every vertex generated by the iFOD2 algorithm, including the multiple samples that are generated per "step", whereas these intermediate samples are by default discarded prior to writing the output track file. So it's possible for an ROI to be intersected during tckgen but missed when feeding the output of tckgen to tckedit. Upsampling beforehand using tckresample would get close to the same behaviour, though not precisely equivalent.
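A sketch of that upsample-then-edit pipeline, reusing the ROI files from the original post (the upsample ratio of 4 is an assumption intended to roughly match the iFOD2 default number of samples per step):

```shell
# Upsample streamline vertices so tckedit tests ROIs against a vertex density
# closer to what tckgen evaluated internally, then apply the same ROIs
tckresample -upsample 4 tracks_100M_nocoreg.tck tracks_100M_up.tck
tckedit tracks_100M_up.tck OR_edited.tck \
    -include thalamus.nii -include ROI_ventricles.nii.gz \
    -exclude ROI_sag.nii.gz -exclude ROI_cor.nii.gz
```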
If this discrepancy is something that for you necessitates further investigation, it’s definitely something that can be done, but it would be necessary to generate hypotheses that can be tested with tailored experiments.
Sorry for a very late response but I’ve just seen this.
The OR is a particularly difficult thing to delineate (even, I’m told, invasively). My own experience is that the whole-brain approach would work in theory, but in practice is impossibly slow for this problem as so few streamlines from whole-brain tractography will delineate the OR with any real accuracy.
May I suggest having a read of my recent manuscript on this topic? A long run of published works now shows that even targeted-but-simplistic approaches will recover much of the radiation, but with considerable variability, and will commonly miss some or all of its anterior aspect. Simple approaches are also very inefficient, as you suggest, and I haven't come across a simple way to make this substantially faster and more reliable, short of building something much more sophisticated.
And just to add: Rob's comment about more streamlines appearing messier is worth real consideration.
Visualising two delineations in mrview can give the impression that the tractography with more streamlines is messier when that is not the case. For example, my own work running our OR algorithm on different images gave the impression that tractography was visually messier for the high-resolution DWI. This was simply because more streamlines were acquired in total; the proportion of 'stray' streamlines was substantially lower.
If you need to verify whether this is the case for you, the best way I have found is to convert your tractography into a track density map, and divide that image by the total number of streamlines. Voxel intensities then reflect the proportion of streamlines passing through each voxel, essentially controlling for streamline count.
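A minimal sketch of that normalisation using the targeted-tracking output from earlier in the thread (the count of 10000 is taken from the `-select` value used there; in general it should be read from `tckinfo -count`):

```shell
# Track density image on the FOD grid
tckmap tracks_10k_nocoreg_seedimage.tck -template wmfod_norm.mif tdi.mif
# Divide by the total streamline count: voxel values become proportions
mrcalc tdi.mif 10000 -div proportion.mif
```

The two `proportion.mif` images (one per delineation strategy) can then be compared on equal footing in mrview.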