Hello experts, I noticed that there is an option in the fixel2voxel script that allows a measure of complexity to be extracted and assigned as a scalar value at each voxel. I attempted to tweak the fixel analysis pipeline to extract complexity and to perform voxel-wise cross-subject inference (using mrclusterstats). Here are the steps I used; I couldn't find much documentation, so I could be way off base. If anyone can confirm that this is okay, or point me to a better approach, I'm all ears.
Then I simply ran mrclusterstats using the complexity scalar .mif images as inputs, per the documentation for mrclusterstats. So, is it that simple? Does this method accomplish acceptable cross-subject registration for complexity?
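For concreteness, what I ran looks roughly like this (file names and the fixel directory layout are placeholders, and may differ between MRtrix3 versions):

```shell
# Collapse each subject's fixel data in template space to a per-voxel
# 'complexity' scalar (operation name per the fixel2voxel documentation)
for subj in sub-01 sub-02; do
    fixel2voxel ${subj}/fixels_in_template/fd.mif complexity ${subj}/complexity.mif
done

# input.txt lists one complexity image per line; design and contrast
# matrices are set up as for any GLM-based analysis
mrclusterstats input.txt design.txt contrast.txt template_mask.mif stats_out
```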
You won’t find any documentation on this sort of thing since it’s generally not the kind of analysis that we would suggest doing, due to the loss of both fibre sensitivity and specificity. Nevertheless, it’s something that can be done using the tools available.
I’ll post a few thoughts since the post hasn’t received any feedback thus far:
The first decision to make is whether data smoothing and statistical analysis will be performed in 3D image space, or using Connectivity-based Fixel Enhancement. The latter is possible by using voxel2fixel to map the voxel scalar value to each fixel in the voxel. Though whether or not the latter is appropriate is another question.
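If going the CFE route, the mapping back onto fixels would look something like this (paths are hypothetical):

```shell
# Map the per-voxel complexity value onto every fixel within that voxel,
# so that fixelcfestats (and hence CFE) can be used for statistics
voxel2fixel complexity.mif template_fixel_directory complexity_fixel_directory complexity.mif
```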
The second decision to make is whether the ‘complexity’ metric should be computed in subject or template space. The value of the ‘complexity’ metric depends only on the relative volumes of different fixels within a voxel. Therefore:
This will be influenced by whether or not you modulate the fibre densities by their corresponding change in fibre cross-sectional area.
This will not be influenced by fixel reorientation (since that reorientation is applied to the fixels themselves, and not to the FODs, and so does not alter the relative fixel volumes). So fixelreorient should be redundant.
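For reference, the 'complexity' metric as defined by Riffert et al. (2014), which I believe is what fixel2voxel implements (do double-check against the source), can be written as:

```latex
C =
\begin{cases}
0 & \text{if } N \le 1 \\[4pt]
\dfrac{N}{N-1} \left( 1 - \dfrac{\max_i f_i}{\sum_{i=1}^{N} f_i} \right) & \text{if } N > 1
\end{cases}
```

where N is the number of fixels in the voxel and f_i are their fibre density values. Scaling all f_i by a common factor cancels out, which is why only the relative fixel volumes matter; but a modulation that scales fixels within a voxel differently will change the metric.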
If using 3D cluster-based stats, I don’t see any reason why processing should not be performed as would have been done for e.g. FA analysis: Compute the measure of interest in subject space, and warp this to template space (without any type of modulation) to achieve cross-subject correspondence. The difference between this approach, and what it looks like you’re doing currently, is that the re-gridding to template space will involve interpolation of a complexity measure image, rather than re-sampling the FOD image in template space and then computing complexity.
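In command form, that FA-style route would look something like this (hypothetical file names):

```shell
# Compute complexity in subject space from the subject's own fixel data
fixel2voxel fixels_subject/fd.mif complexity complexity_subject.mif

# Warp the resulting scalar image to template space using plain interpolation;
# no modulation or reorientation is needed for a scalar image
mrtransform complexity_subject.mif -warp subject2template_warp.mif complexity_template.mif
```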
Technically any differences in this re-sampling should disappear due to data smoothing, but I just realised that in your current description, no data smoothing will be applied. Smoothing occurs as part of fixelcfestats (since it relies on the fixel-fixel connectivities, which are computationally expensive to obtain); mrclusterstats, on the other hand, does not include data smoothing. So you'll need to handle this step explicitly.
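For explicit smoothing of the scalar images prior to mrclusterstats, mrfilter's smooth operator would be the natural choice; for instance (the FWHM here is only an example value):

```shell
# Gaussian smoothing of the template-space complexity image;
# choose the FWHM to match whatever is conventional for your analysis
mrfilter complexity_template.mif smooth -fwhm 4 complexity_smoothed.mif
```

Bear in mind that isotropic smoothing will blur values across tract and tissue boundaries, which is part of the fibre specificity loss mentioned above.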
That’s all that’s coming to mind for now. Have fun!
Rob
Thanks so much, Rob, I really appreciate your thoughtful reply.
Can you say more about why comparing complexity across subjects isn’t a recommended analysis? I get that I’m basically not taking advantage of the power of fixels and connectivity-based enhancement, but I am really interested in measuring complexity. Is there a better way to assess the relative degree of crossing fibers across subjects?
Regarding smoothing the data: is mrfilter an appropriate command for accomplishing this step, or is there a better way?
The primary concern is that a group change in a specific tract may be erroneously interpreted as an effect in a crossing-fibre pathway. This has regularly been the case with FA studies in the past. This is due not only to the metric being quantified, but also to the isotropic statistical enhancement intrinsic to 3D cluster-based statistics.
The most important thing is to interpret any observed statistically significant effect within the limitations of the metric being quantified. As long as those limitations are adhered to, there’s nothing wrong with this type of analysis. Note that this includes reporting effects as “differences in the ‘complexity’ metric” as opposed to a more generic “crossing fibres are more/less complex”, since the latter is open to subjective interpretation; it requires some care in knowing precisely how different changes in the FOD manifest as differences in the quantified metric, and therefore how changes in the metric may have arisen (and this may be ambiguous).
"Is there a better way to assess the relative degree of crossing fibers across subjects?"
Not that I’m aware of. People have tried to attribute this sort of interpretation to things like GFA in the past.
"The primary concern is that a group change in a specific tract may be erroneously interpreted as an effect in a crossing-fibre pathway."
I see what you mean. So, consider the following scenario: I have an ROI in which there are significant group differences in FA. In that ROI, these FA differences are more strongly correlated with complexity/count (voxel-wise) than they are with FD or FC (fixel-wise; the ROI was analyzed with fixelcfestats plus an ROI mask).
Does this situation give any leverage to an argument for group differences in fiber organization? Or are there other equally likely interpretations (e.g., free water contribution, noise)? If other interpretations are likely, is there a way to follow up on dissecting connectivity in those areas? Any references about this issue would be awesome (I have the 2014 Riffert paper).
"The most important thing is to interpret any observed statistically significant effect within the limitations of the metric being quantified. As long as those limitations are adhered to, there's nothing wrong with this type of analysis. Note that this includes reporting effects as "differences in the 'complexity' metric" as opposed to a more generic "crossing fibres are more/less complex", since the latter is open to subjective interpretation; it requires some care in knowing precisely how different changes in the FOD manifest as differences in the quantified metric, and therefore how changes in the metric may have arisen (and this may be ambiguous)."
Thanks. Again, are there methods available for getting more precise about this? E.g., if significant differences in FA are NOT underpinned by fixel-wise FD/FC/FDC, but ARE linked to complexity and count across an ROI, how would you (and any others with expertise) interpret such a finding? Again, any references to this effect would be great. I am really excited about this software, but I have been somewhat at a loss to explain group differences in FA that are not particularly attributable to FD or FC.
Well, a typical example that could explain such a case would be a difference in crossing fibre angle, without any real difference to the microstructure (intra-axonal volume / density / number of axons).
…and that immediately highlights an important criticism of FA: not only is it very non-specific, it's also sensitive to effects you probably don't want to measure or factor in. The angle between two crossing bundles of axons probably has very little to do with how intact those bundles are (i.e. their "integrity", but that's an ugly word).
Sorry, I had missed this bit in your earlier reply. In line with my first answer above, the answer to this one is: yes! This could be anything from crossing angle to dispersion, etc.
Free water contribution: also yes in theory, but maybe not so likely in practice, as more free water would typically be accompanied by less intra-axonal space in the white matter, so one would typically expect a decrease in FD as well. However, again, things could be more complicated, as intra-axonal space and free water are not the only things in a voxel, and there's some content we don't really get any signal from at all. You could for instance have "pure" demyelination, i.e. without affecting the intra-axonal space, and acutely also not affecting axon packing density. Suppose free water takes the place of what was myelin before: the b=0 signal goes up (higher T2 contribution), but the b=3000 signal barely changes. FA probably goes down, FD probably not. So that's another hypothetical example compatible with an FA change without an FD change.
And all of that is again part of why we don’t like DTI and FA all that much.
Thanks, Thijs, this is really helpful. A potentially serious caveat that I should have mentioned here is that I’m working with b=1000 diffusion data (64 directions). I have learned from this forum that, certainly, CSD-based analyses are better than using FA, even at low b-values, but I am concerned about whether a lower b-value might nullify any argument for group differences in complexity (angle, dispersion), and/or fixel count at the voxelwise level.
Although a lower b-value will make it more difficult to detect crossings at fine angles / low-volume fixels / orientation dispersion, as long as there isn’t a bias between groups in the detection of these features, I don’t see that the argument is conceptually problematic (as long as a group average response function is used).
Also bear in mind that because the complexity metric is dependent on a discrete rather than continuous representation of the FOD, the results are intrinsically dependent on the FOD segmentation process, and therefore any instability / noise amplification introduced by it. For instance: Imagine two near-identical voxels, with 2 fibre populations crossing at 45 degrees (and hence it’s difficult to distinguish them). If the segmentation algorithm labels one FOD as a single fixel but the other as two, the complexity metric will jump from 0.0 to 0.5. Hopefully this only contributes to within-group variance, but it’s difficult to envisage all possible situations.
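To put numbers on that jump (assuming the Riffert et al. definition, C = N/(N-1) * (1 - max f_i / sum f_i)): a single fixel gives C = 0, whereas segmenting the same FOD into two fixels with relative volumes 0.75 and 0.25 gives:

```latex
N = 2,\ f = (0.75,\ 0.25):\qquad
C = \frac{2}{2-1}\left(1 - \frac{0.75}{0.75 + 0.25}\right) = 0.5
```

So a borderline segmentation decision in otherwise near-identical voxels produces a large discrete change in the metric.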
As Rob reasoned too, in principle this is not a problem for CSD: even though the data has lower contrast, the response function should represent this too. Deconvolution acts in a way to normalise for this. However, it’s harder at low b-value because you’re essentially trying to sharpen a smoother contrast more, to get the same sharp FOD. So expect more variance: noisier FODs, maybe spurious peaks (with consequences for that complexity metric, as Rob alluded to).
As to FD at lower b-values, that also becomes slightly more problematic, as with a lower b-value, there will still be more signal left from, e.g., free water. So as the b-value goes down, the interpretation of FD gradually breaks down. To relate this to my earlier example of free water replacing myelin in a certain scenario: this may in fact, in theory, even cause FD to go slightly up. However, even at b=1000, I expect such effects (in the case of my free water example) to still be quite subtle. I don't see them overpowering any "real" differences in FD.
So, long story short: I mostly expect you to potentially have less power due to possibly increased variance, and more difficulty in correctly dealing with the structure of the FODs (while segmenting them, or trying to match subsequent fixels in the case of a fixel-based analysis).
Thank you both! I am using a group average response function, so I think noise shouldn’t be a problem at the between-groups level.
In hopes of adjusting (at least somewhat) for free water, I am currently redoing my analyses using the dhollander option in dwi2response (as suggested in another post of yours). Ah, clinical data. I’ll reply here if it makes any difference in my results!
Happy to hear any other suggestions for due diligence, interpretation, etc. regarding the low b-value data.
Yep, that should indeed help to some extent; as long as it's then of course followed up by dwi2fod msmt_csd, provided with the WM and CSF responses (and ditching the GM one for low b-value single-shell data for now).
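In command form, that would be something along these lines (file names are placeholders):

```shell
# Estimate WM / GM / CSF response functions with the dhollander algorithm
dwi2response dhollander dwi.mif wm_response.txt gm_response.txt csf_response.txt

# For low b-value single-shell data, run MSMT-CSD with only the WM and CSF
# responses (the GM response is ditched, as discussed above)
dwi2fod msmt_csd dwi.mif wm_response.txt wmfod.mif csf_response.txt csf.mif -mask mask.mif
```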