Fiber density direct comparison across groups

Hi Jidan,

Hopefully you caught my blog post from a couple of days ago, which discussed some of the points you raised. But I’ll address some of them more directly here:

I calculated fiber density by dividing the fiber counts between ROIs by the volume of the ROIs …

This in particular was one of the points I focused on most in my blog post: if you’re looking for a more meaningful measure of fibre density, this step alone is already diluting both the measurement and its interpretation.
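For concreteness, here is a minimal sketch of the calculation you describe, assuming a matrix of streamline counts and a vector of node volumes (all numbers below are hypothetical, and dividing by the summed volume of the two end-point ROIs is just one reading of “the volume of the ROIs”). The blog post’s argument is precisely that a value computed this way - streamlines per mm³ - is difficult to interpret:

```python
import numpy as np

# Hypothetical streamline-count connectome (symmetric) and ROI volumes.
streamline_counts = np.array([[  0., 120.,  30.],
                              [120.,   0.,  75.],
                              [ 30.,  75.,   0.]])   # streamlines between ROI pairs
node_volumes_mm3 = np.array([8500., 12300., 6100.])  # ROI volumes in mm^3

# Divide each edge's count by the summed volume of its two end-point ROIs,
# i.e. the "density" described in the question (units: streamlines per mm^3).
volume_sums = node_volumes_mm3[:, None] + node_volumes_mm3[None, :]
edge_density = streamline_counts / volume_sums
print(edge_density)
```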

Most of the publications calculating fiber density use it to generate graph-theory-based measures. Can I do simple comparisons between them instead of these graph measures?

Absolutely. This is very much an intended application of SIFT and related methods: a direct comparison of fibre density of particular pathways. The disadvantage of such an approach is the need for an a priori hypothesis: you can’t really test every edge in the connectome independently for an effect, since the correction for multiple comparisons will destroy your statistical power. You should also note that the inter-subject variance in the quantification of fibre density in individual edges is quite high; see for example the scatterplots I generated in this paper.
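To make that concrete, below is a hedged sketch of what such a direct comparison might look like: a single a priori pathway tested between two groups, alongside the Bonferroni arithmetic that illustrates why testing every edge independently is so costly. All per-subject values and the 84-node parcellation size are invented for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject fibre density values for one a priori pathway.
group_a = np.array([0.84, 0.91, 0.78, 0.88, 0.95, 0.81])
group_b = np.array([0.72, 0.69, 0.80, 0.74, 0.77, 0.70])

# Simple two-sample t-test on that single edge.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"single a priori edge: t = {t_stat:.2f}, p = {p_value:.4f}")

# Testing every edge of an (assumed) 84-node parcellation instead means
# 84 * 83 / 2 = 3486 comparisons; a Bonferroni-style correction then demands
# a far smaller per-edge threshold, which is where the statistical power goes.
n_nodes = 84
n_edges = n_nodes * (n_nodes - 1) // 2
print(f"Bonferroni-corrected threshold: {0.05 / n_edges:.1e}")
```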

Once it’s formally published, we will add the NBS2 method to MRtrix3, which provides something of a compromise between individual connection testing and network analysis.

If yes to the 1st question, then is the fiber density I calculated the correct measure? Although it is normalized with the ROI volume, the way the whole-brain connectome was generated may mess with it. After SIFT, every subject has the same number of whole-brain tracts, which may cause a bias due to brain volume: e.g. a subject with a 1000 mm³ brain should not have the same number of tracts as a subject with a 10000 mm³ brain. Does that make sense? Does it mean that if I want to compare fiber density, I need to multiply by the whole-brain volume?

Yes, this is a topic that crops up repeatedly, and one I should have published on a long time ago. I tend to refer to this as connection density normalisation; it’s a bit like inter-subject image intensity normalisation, except that here you’re trying to make your measure of connection density comparable between subjects.

This issue requires a little more finesse than simply identifying factors that may influence the quantification of connection density and multiplying / dividing as seems appropriate. For instance: if you had gross differences in brain volumes, these would already influence the parcellation node volumes, and therefore your initial fibre density estimates if you divide the streamline count in each edge by the node volumes. Subsequently scaling those estimates by brain volume could then either partially cancel out the intended effect, or amplify it to the point where the scaling itself becomes a systematic bias, rather than compensating for one.
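As a purely toy illustration of that interaction (every number below is invented, and in reality streamline counts, node volumes and brain volumes would not scale so neatly), consider what happens when node volumes track brain volume and both “corrections” are applied in sequence:

```python
# Two hypothetical subjects with the same streamline count in one edge,
# but subject B's brain (and hence its parcellation nodes) is twice as large.
count = 500.0                 # streamlines in the edge, both subjects
node_vol_sum_A = 15000.0      # summed end-point node volumes, mm^3
node_vol_sum_B = 30000.0
brain_vol_A = 1.2e6           # whole-brain volumes, mm^3
brain_vol_B = 2.4e6

# Step 1: divide by node volumes -> subject B's value is halved purely
# because of brain size.
density_A = count / node_vol_sum_A
density_B = count / node_vol_sum_B

# Step 2: multiply by whole-brain volume -> the factor of two is reintroduced,
# so here the two steps cancel exactly (by construction); under different
# assumptions about how counts and volumes scale, the same step could instead
# over-correct and become a systematic bias of its own.
print(density_A * brain_vol_A, density_B * brain_vol_B)
```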

Some of the arguments I made in the blog post regarding scaling edge densities by node volumes - in particular, the units of the quantification (and hence the interpretability of the measure), and the formulation of a hypothesis - are also relevant to mechanisms such as scaling by whole-brain volume. So I would suggest re-reading those in this context.

In general I tend to recommend that people stick with simply fixing the number of streamlines per subject - at least until I publish an alternative :stuck_out_tongue:. It’s already common in the field, so it shouldn’t be too hard to get past reviewers, and its limitations can be acknowledged appropriately.

Thanks for your patience!
Rob