Set track numbers and connectomics

Hello MRtrix Community,

Long time listener, first time caller :slight_smile:

I am running a connectomic analysis on a healthy aging cohort; however, I also wish to apply my current pipeline to a neurodegenerative patient cohort.

Per the suggestion of the documentation, I am producing 10 million tracks with iFOD2, using ACT and dynamic seeding. I then use the SIFT algorithm to filter down to 1 million.
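
Roughly speaking, the pipeline looks like the following (file names here are just placeholders, and I've omitted other options; exact option names may differ slightly between MRtrix3 versions):

```
# 10 million streamlines with iFOD2, ACT and dynamic seeding
tckgen wmfod.mif tracks_10M.tck -algorithm iFOD2 -act 5tt.mif -seed_dynamic wmfod.mif -select 10000000
# SIFT down to 1 million streamlines
tcksift tracks_10M.tck wmfod.mif tracks_1M_sift.tck -term_number 1000000
```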

I am assuming that having a set number of tracks across participants will result in elevated track densities in subjects with fewer white matter voxels due to atrophy or smaller TIV. I am worried that this elevated fiber density could bias my connectomic measures, potentially resulting in apparently stronger connectivity in atrophied areas.

Have you encountered this issue in the past? Do you have any suggestions for getting around this issue while still utilizing the benefits of dynamic seeding and SIFT?

Hi @gabemarx,

It depends on the specific pathology. For instance:

  • If every pathway in the brain were to reduce in density by 50%, but tractography otherwise behaved identically, then by using a fixed number of streamlines per subject your experiment would be completely oblivious to the difference.

  • If only a single pathway were reduced in density, then (hopefully) SIFT will reveal this. However, using an identical number of streamlines per subject would (in theory) marginally reduce the magnitude of the effect in that pathway, and yield a marginal increase in density in all other brain pathways (see the toy numbers below). In reality though, for a single affected pathway this effect is unlikely to be detectable due to the poor reproducibility of tractography as a whole.
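
To put some toy numbers on that second scenario: imagine a brain with 10 pathways of equal fibre density, where one pathway loses 50% of its density, and a fixed total of 1 million streamlines per subject. In the healthy subject each pathway receives ~100,000 streamlines; in the patient, SIFT apportions streamlines according to relative density, so the affected pathway receives 0.5 / 9.5 × 1M ≈ 53,000 streamlines (an apparent reduction of ~47% rather than the true 50%), while each unaffected pathway receives 1 / 9.5 × 1M ≈ 105,000 streamlines (an apparent increase of ~5%).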

This issue has in fact been raised many times on the forum; they’re just difficult threads to find, as everybody describes the issue in their own unique way. I refer to this as inter-subject connection density normalisation, and it’s comparable to the inter-subject intensity normalisation issues inherent to performing an AFD analysis. These threads also invariably contain me saying something along the lines of “I really need to write that paper…”.

I really need to write that paper…

For most applications, fixing the number of streamlines across subjects is in fact a perfectly reasonable mechanism of inter-subject connection density normalisation. Additionally, some connectomic measures should be invariant under global scaling of all values within the matrix, which renders density normalisation redundant. However, quantifying specific connection densities in the presence of significant atrophy is the case where this really breaks down.

It is possible to perform an appropriate scaling of connection densities across subjects; it involves a combination of AFD-like intensity normalisation and consideration of the SIFT proportionality coefficient. However, as I’ve said to other users, one of the goals of my paper is to justify why that scaling is appropriate; my concern is that without that paper in the literature, use of such scaling in an applications paper may not be convincing to reviewers / readers. Indeed, dividing all connection densities by TIV may be more passable; whether or not this has the appropriate effect, however, depends on whether the atrophy is microscopic or macroscopic, and hence whether it affects TIV.
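
For what it’s worth, a crude sketch of the TIV option, assuming the connectome is built with tck2connectome (file names are placeholders):

```
# build the connectome from the SIFTed tractogram and a node parcellation image
tck2connectome tracks_1M_sift.tck nodes.mif connectome.csv
# then, outside MRtrix (e.g. numpy / MATLAB / R), divide every edge weight
# by that subject's total intracranial volume before group-level analysis
```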


So, summary: If the atrophy is mild / specific, I’d be tempted to ignore it. If it’s significant, you can consider either dividing by TIV or including it as a nuisance regressor. If you really feel as though your experiment needs the best solution and think you can sufficiently justify it in your manuscript, the solution is out there; but I’d really prefer that you don’t exploit the assistance and try to claim novelty for the mechanism before I get the chance to explain / justify it properly (if it weren’t for forum obligations I’d probably have published it by now!).

Cheers
Rob

Hi Rob,

I appreciate the response and the references. I look forward to the paper on the subject.

I am still confused as to why this issue can’t be solved by either setting your track number to be dependent on white matter volume (for example, setting the number of tracks to 100 × the number of WM voxels and then SIFTing down to 10 × the number of WM voxels), or by seeding from the white matter voxels and simply setting your SIFT target number to 10% of the original number of tracks. In either case, wouldn’t you end up with a track density that is not biased by white matter volume?

-GM

In either of those cases, you’d be doing an approximate compensation for white matter volume. You’d be altering either the number of streamlines, or the number of streamline seeds, based on WM volume; but this is not necessarily proportional to track density, which is dependent on the sum of streamline lengths rather than the track count. If two subjects have equivalent WM volumes, and you therefore use the same number of streamlines in those two subjects, this would mask any differences in mean fibre / streamline length and/or differences in macroscopic fibre density between them.

Realistically, doing exactly what you describe here (modulating the number of tracks per subject based on WM volume) is perfectly acceptable in cases where you expect to see large variation in WM volume, but not necessarily variations in macroscopic fibre density. I don’t see any issue with such a correction getting through peer review.
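
If you do go down that route, a rough sketch in bash of what it might look like (file names are placeholders, and you’d want to check the exact option names against your MRtrix3 version):

```
# count the voxels in a binary WM mask
nvox=$(mrstats wm_mask.mif -mask wm_mask.mif -output count)
# generate 100 streamlines per WM voxel, then SIFT down to 10 per WM voxel
tckgen wmfod.mif tracks.tck -act 5tt.mif -seed_dynamic wmfod.mif -select $((100 * nvox))
tcksift tracks.tck wmfod.mif tracks_sift.tck -term_number $((10 * nvox))
```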