Dear MRtrix experts,
We are currently working on tractography to small subcortical subnuclei. For example, in the thalamus some nuclei are quite small (only a few mm) and are directly adjacent to one another.
As expected, we found that if we change the
tck2connectome -assignment_radial_search ‘radius’
option, we change the number of streamlines assigned to those nodes.
If we work with a liberal radius of 1 or 2 mm, which gives good results for our cortical fiber counts, the subcortical fiber counts for the small nuclei seem to be a bit high. We suspect that small nuclei in particular ‘benefit’ from the search radius around them.
If we reduce the search radius to 0.5 mm, we see a huge decrease in fiber count for both the cortical and subcortical elements. For the cortex, 0.5 mm may be too conservative because of the 5TT definition of streamline ending points; but for the small subcortical elements we would expect this radius to be more reliable, since the nuclei are generally quite small (and in our opinion the 5TT definition should be less of a problem there, see below).
Our question to you is: setting aside the idea of using more advanced ACT techniques like ‘mesh-based ACT’, what is your experience with small nuclei?
Second, in my understanding of ACT, with subcortical grey matter defined in the 5TT file, fibers will go ‘into’ and terminate ‘within’ the tissue rather than stopping at the border of the area as they do at the cortex. For that reason, it should be possible to extract the fiber count to any subdivision of a subcortical region by simply further subdividing the ‘input node parcellation’ image provided to the tck2connectome command, and thereby extract the fiber count to the subdivisions only. Some confirmation would be appreciated :).
Best wishes, Tom
I’d be hesitant about basing conclusions on streamline counts “seeming” high or low; there are more objective ways in which this can be both conceptualised and analysed. Ultimately though, I probably should not have taken so long to supersede the radial search mechanism; it was only ever intended to be a stop-gap solution…
Imagine that the maximal search radius is infinity. Every streamline termination is assigned to the nearest parcel, regardless of distance. Now start progressively decreasing the maximal permissible distance. For any individual parcel, this will have zero effect until you cross the threshold of the streamline most distant from that parcel that is nevertheless assigned to that parcel. As the distance threshold decreases further, more and more terminations that were originally assigned to that parcel will now not be assigned to any parcel.
Ideally, at some specific distance, there would be a “plateau” of stable results, where streamlines that could erroneously be assigned to the parcel (e.g. due to terminating in an erroneous tissue classification nearby the parcel) are omitted, but all streamlines that “should” be assigned to the parcel still are, and that result is stable within some range of distance thresholds. The trouble is that this “ideal” distance threshold may 1) not exist; 2) be different for different parcels. Nevertheless, the thought experiment at least demonstrates that looking at raw counts at a handful of thresholds doesn’t show the full story.
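The thought experiment above can be sketched in a few lines of purely illustrative Python (this is not MrTrix code; the 1-D parcel centres and termination coordinates are made up for demonstration):

```python
# Illustrative sketch (not MRtrix code): a 1-D toy model of
# nearest-parcel assignment under a shrinking maximal search radius.

def assign(termination, parcel_positions, max_radius):
    """Return index of the nearest parcel, or None if it lies beyond max_radius."""
    best, best_dist = None, float("inf")
    for idx, pos in enumerate(parcel_positions):
        d = abs(termination - pos)
        if d < best_dist:
            best, best_dist = idx, d
    return best if best_dist <= max_radius else None

parcels = [0.0, 5.0]                 # two made-up parcel centres (mm)
terminations = [0.2, 0.9, 1.8, 4.6]  # made-up streamline endpoints (mm)

for radius in (4.0, 2.0, 1.0, 0.5):
    counts = [0, 0]
    for t in terminations:
        node = assign(t, parcels, radius)
        if node is not None:
            counts[node] += 1
    print(radius, counts)
```

Per parcel, the count stays flat until the radius crosses that parcel’s most distant assigned termination, then drops; it never increases as the radius shrinks.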
If you’re dealing with adult data, I wouldn’t call 1-2mm a “liberal” radius. When the code was first made available, the default radius was 2mm, as this seemed to do reasonably well for my test data; but I received feedback that it excluded reasonable streamline terminations from assignment too regularly. So in 3.0.0 I increased the default radius to 4mm. This does mean that in some cases streamlines will be assigned despite terminating in some other nearby structure (whether a true structure that appears in the tissue segmentation but for which a parcel is not defined, or an erroneous tissue segmentation), but it seems to have been necessary to preclude false negatives.
The search radius is based on the Euclidean distance between the streamline termination and the centre of the nearest parcellation voxel. So if your parcellation image is 1mm isotropic and you set a maximal distance of 0.5mm, then for a streamline that terminates at a corner shared by 8 voxels, it will be impossible for that termination to be assigned to any parcel, as all of the nearest voxel centres lie at a distance greater than 0.5mm. So it’s unsurprising that you’d start to see a huge decrease in streamline count.
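The arithmetic behind that worst case is easy to verify (pure illustration, not MRtrix code):

```python
import math

# Worst case for a 1 mm isotropic parcellation image: a streamline
# terminating exactly at a corner shared by 8 voxels.  Every one of
# those 8 voxel centres then lies at the same distance from the termination.
voxel_size = 1.0
corner_to_centre = math.sqrt(3 * (voxel_size / 2) ** 2)
print(round(corner_to_centre, 3))  # 0.866

# With a maximal search radius of 0.5 mm, no voxel centre is reachable,
# so such a termination cannot be assigned to any parcel:
print(corner_to_centre <= 0.5)     # False
```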
I’ve thought a little bit over the years about how to deal with smaller nuclei, but not to the extent of explicitly doing something about it. As nuclei become smaller, representation as a binary mask on a voxel grid becomes more and more problematic, with the “effective” shape within the digital representation diverging from the true structure shape. One possible trick would be to do something like what I do in the
5ttgen hsvs
algorithm: convert from a volume to a surface representation, potentially apply some surface-based smoothing, then map back to a higher-resolution voxel grid.
It’s also important to recognise the distinction between an individual small nucleus that has no other parcellated structures adjacent to it, versus a small sub-nucleus within a larger structure. For the former, if it’s possible for streamlines to terminate near-but-not-inside the nucleus, then the maximal search radius will influence what does or does not get assigned to it; whereas for the latter, that parameter should have no effect (as long as it’s not erroneously low, as per above).
Second, in my understanding of ACT, with subcortical grey matter defined in the 5TT file, fibers will go ‘into’ and terminate ‘within’ the tissue rather than stopping at the border of the area as they do at the cortex.
For that reason, it should be possible to extract the fiber count to any subdivision of a subcortical region by simply further subdividing the ‘input node parcellation’ image provided to the tck2connectome command, and thereby extract the fiber count to the subdivisions only.
Yes, all the way down to giving each voxel a unique integer identifier.
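As a toy illustration of that idea (hypothetical code operating on nested lists rather than an actual image; in practice you would do this on the parcellation image itself):

```python
# Sketch of the "one parcel per voxel" idea: give every voxel inside a
# nucleus mask its own integer label, producing a parcellation that
# could serve as a node image for connectome construction.

def label_voxels(mask):
    """mask: nested lists of 0/1; returns the same shape with unique labels."""
    next_label = 1
    labels = []
    for slice_ in mask:
        labelled_slice = []
        for row in slice_:
            out_row = []
            for v in row:
                if v:
                    out_row.append(next_label)
                    next_label += 1
                else:
                    out_row.append(0)
            labelled_slice.append(out_row)
        labels.append(labelled_slice)
    return labels

mask = [[[0, 1], [1, 1]]]   # tiny 1x2x2 example mask
print(label_voxels(mask))   # [[[0, 1], [2, 3]]]
```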
Thank you very much for that clarifying response!
The fact that streamlines are assigned to the nearest parcel regardless of the radial search distance (as long as the radius is not ‘pathologically’ low) was missing from my understanding!
Your explanation of a radial search distance of 0.5mm being too low for our voxel size, which decreases the fiber count all over the brain, is exactly what we see in our data. As you predicted, there is no effect specific to the subnuclei here. We will just go with the default 4mm.
The idea of creating a surface representation for the subcortical nuclei is also very inspiring.
Furthermore, thank you for confirming that our general idea of subcortical fiber termination is right.
I have only one question left that I still cannot fully answer:
If we perform our “normal” subcortical segmentation with the standard pipeline and look at the structural connectivity results for a “whole” subcortical nucleus defined by a standard parcellation file, we get much lower fiber counts to this nucleus than if we sum the fiber counts after dividing the nucleus into smaller parts (via further subdividing the ‘input node parcellation’ image provided to tck2connectome). In our understanding, the counts should be approximately the same.
Regardless of the radial search distance, fiber terminations in inner subcortical subnuclei (those surrounded by other subnuclei) should be assigned to the nearest subnucleus by tck2connectome. The outer subnuclei adjacent to the white matter may show slightly different values depending on the radial search distance. But overall, there should be no big difference in fiber count between not subdividing the nucleus versus subdividing the whole nucleus and summing those values.
At the moment I have decided to compare divided subcortical subnuclei only with other divided subcortical subnuclei, because this behaviour of higher values in divided compared to non-divided nuclei is actually not clear to me.
If we perform our “normal” subcortical segmentation with the standard pipeline and look at the structural connectivity results for a “whole” subcortical nucleus defined by a standard parcellation file, we get much lower fiber counts to this nucleus than if we sum the fiber counts after dividing the nucleus into smaller parts … there should be no big difference in fiber count between not subdividing the nucleus versus subdividing the whole nucleus and summing those values.
Correct. However, given there are steps occurring here that are outside of my control, and I don’t have any particularly strong hypothesis regarding what might plausibly lead to such an observation, I can only advise you to look at your data very closely. For instance, you could use
connectome2tck
to extract those streamlines assigned to particular parcels, extract their endpoints using
tckresample
, examine the locations of those streamline endpoints that were assigned to one of the sub-divisions of that structure but not to the structure itself when no sub-division was performed, and try to determine the origin of that discrepancy.
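As a purely illustrative sketch of that diagnostic (hypothetical streamline IDs and node labels, not MRtrix code), the comparison amounts to a set difference between the two assignment schemes:

```python
# Given per-streamline assignments under the two parcellation schemes,
# list the streamlines that were assigned under the subdivided scheme
# but unassigned under the whole-nucleus scheme -- these are the
# endpoint locations worth inspecting visually.

# Hypothetical assignments: streamline id -> node label (None = unassigned)
whole_nucleus = {1: 10, 2: None, 3: 10, 4: None}
subdivided    = {1: 101, 2: 102, 3: 101, 4: None}  # 10x = sub-parts of node 10

discrepant = [sid for sid in whole_nucleus
              if whole_nucleus[sid] is None and subdivided[sid] is not None]
print(discrepant)  # [2]
```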