Replicating longitudinal fixel-based analysis approach


#21

Is there any way to check whether the GLM worked properly or not?

It’s not a matter of whether or not it “worked properly”; that’s the danger with the GLM, it’ll usually do something, but whether or not that something is “proper” is up to the researcher, not the software.

My intuition says:

  • You’d simply not be exploiting all possible mechanisms for shuffling, and therefore wouldn’t be able to generate as many unique shuffles as you would otherwise, but in most scenarios we don’t process anywhere near the maximum possible number of unique shuffles anyway.

  • If there were to be any bias in the null distribution arising from the incomplete error modelling, it’s probably smaller in magnitude than the biases that result from using the Shuffle-X method in the presence of nuisance regressors (assuming you have any).

Depends on the extent to which you’re willing to bet the integrity of your research on a stranger from the internet :sweat_smile: I won’t be offended if you seek clarification from someone with more GLM experience than myself.

I think I managed to merge the updated branch with my local MRtrix3 repository. Thank you for that.

Just beware that the difference between that branch, and what you were using previously, may not be restricted to only the statistical inference commands: stats_enhancements is based on dev, which includes myriad changes not yet merged to master.

Just a small follow-up question: how can I choose to use sign-flipping instead of permutation testing in fixelcfestats? Is there an option for that? I couldn’t find anything about it in the documentation.

You won’t find it in the online documentation, because that is generated automatically from the code, and the version of the code from which the online documentation is built doesn’t have this capability. If you instead check the command’s help page, by running the command without any arguments or with the -help option, you get the documentation relevant to the particular version of the software you have compiled and are running. Here you’re specifically looking for the -errors option.
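For example (assuming the stats_enhancements build of fixelcfestats is the one on your PATH), something along these lines should show it:

```
# Inspect the help text of the fixelcfestats binary you actually compiled;
# the option of interest should appear in the list of command options.
fixelcfestats -help | less

# Or search the help text for that option directly:
fixelcfestats -help | grep -A 4 -e '-errors'
```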

Rob


#22

Thank you for the detailed reply.

So what I’m making of this is that, while there is probably some bias present, it is negligible. Am I correct?

Thanks again :+1:


#23

Dear all

I would like to continue on this topic by asking for advice if one has a longitudinal study design that includes more than two time points. We have data with 6 scans (time points) per subject and two groups. As a result, it would be difficult to follow the same strategy as in the paper by Genc et al. (2018, NeuroImage), i.e. subtract FD(time point 1) from FD(time point 2), divide this by the time interval in years, and perform statistical analyses on the difference map.

The preprocessing would be quite similar to what was discussed in the previous messages, right? First compute a within-subject template based on all datasets of that subject (population_template), and then use those within-subject templates to create a population-based between-subject template (population_template). Next, combine both spatial transformations (transformcompose) and apply them to the native-space FOD maps (mrtransform)? This way all FODs are transformed to the population-based template space, right?

For the statistical analyses, would it be possible to do a repeated-measures ANOVA or mixed-model analysis? Alternatively, if that were not possible, would it be a sound approach to, as in fMRI, compute a first-level (within-subject) statistical test to identify fixels that exhibit a significant main effect of time for e.g. FD? And then, for a second-level (group) analysis, perform e.g. a one-sample t-test to identify those fixels that display a consistent change over time across all subjects of the group? Or a two-sample t-test to compare the effect of time between the two groups?

Any ideas/suggestions/help would be great! :slight_smile: Thanks in advance!

Best
Julie


#24

I would like to continue on this topic by asking for advice if one has a longitudinal study design that includes more than two time points.

My suspicion is that you might be the first one to do this using FBA. Hope you’re up for the challenge!

Next, combine both spatial transformations (transformcompose) and apply them to the native-space FOD maps (mrtransform)? This way all FODs are transformed to the population-based template space, right?

The key here is not so much that the FODs get transformed to the population-based template space, but that this occurs with a single interpolation step, and that intra-individual alignment is not impacted by the variability that would be introduced by registering each scan from that subject directly to a population template rather than to an individual-specific template.
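To make that concrete, something along these lines (a sketch only; all file names are placeholders, both templates are assumed to have already been built with population_template, and exact option usage may differ between MRtrix3 versions) would compose and apply the warps for one scan of one subject:

```
# Placeholders: subjX_tp1_fod.mif  = native-space FOD image, subject X, time point 1
#               subjX_template.mif = within-subject FOD template for subject X
#               wmfod_template.mif = population FOD template built from the within-subject templates

# 1. Non-linear warp from this scan to its within-subject template
mrregister subjX_tp1_fod.mif subjX_template.mif \
    -nl_warp subjX_tp1_to_subjtemplate_warp.mif subjtemplate_to_subjX_tp1_warp.mif

# 2. Non-linear warp from the within-subject template to the population template
mrregister subjX_template.mif wmfod_template.mif \
    -nl_warp subjtemplate_to_poptemplate_warp.mif poptemplate_to_subjtemplate_warp.mif

# 3. Compose the two warps (listed in the order they are applied to the image)
#    into a single scan-to-population-template warp
transformcompose subjX_tp1_to_subjtemplate_warp.mif \
    subjtemplate_to_poptemplate_warp.mif subjX_tp1_to_poptemplate_warp.mif

# 4. Apply the composed warp to the native-space FOD image: one interpolation step only
mrtransform subjX_tp1_fod.mif -warp subjX_tp1_to_poptemplate_warp.mif \
    -reorient_fod yes subjX_tp1_fod_in_poptemplate.mif
```

(Depending on the version, population_template may also be able to write out the warps it computes internally, in which case the explicit mrregister calls above would not need to be repeated.)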

For the statistical analyses, would it be possible to do a repeated-measures ANOVA or mixed-model analysis? Alternatively, if that were not possible, would it be a sound approach to, as in fMRI, compute a first-level (within-subject) statistical test to identify fixels that exhibit a significant main effect of time for e.g. FD? And then, for a second-level (group) analysis, perform e.g. a one-sample t-test to identify those fixels that display a consistent change over time across all subjects of the group? Or a two-sample t-test to compare the effect of time between the two groups?

fixelcfestats uses the General Linear Model, just like basically any other neuroimaging statistics software, and the underlying raw fixel data can be subjected to any arbitrary mathematical operations. So if you can determine what the recommended processing pipeline would be for an equivalent experiment using fMRI data, I see no reason why you shouldn’t be able to do exactly the same operations on fixel data; though the incoming software changes may be required for various aspects of such an analysis. I would also suggest looking at the two new commands created as part of those developments, and strongly consider their use in the context of your more complex experiment.

As far as two-level analyses go, I’m most certainly not experienced with the details of fMRI data analysis, but I would have presumed that the first-level analysis is not about finding a significant main effect of time (and then looking for consistency of that binary identification across subjects), but rather about quantifying a parameter of interest from the individual subject data (e.g. the rate of change over time; equivalent to simply performing a subtraction between two time points, but generalised to more than two scans), and then taking the map of this parameter for each subject and performing a group-level analysis (e.g. to determine whether the group mean is non-zero). I suppose either approach would technically be statistically sound; but the quantity being derived, and hence the hypothesis being tested, would be substantially different between the two.

If you’re looking to e.g. reduce the 6 time points per subject to a scalar rate of change over time, this could be done using the -notest option in fixelcfestats; this will provide you with the regression coefficients for your particular intra-individual design matrix, and if that matrix is constructed appropriately, one of those coefficients will correspond to the rate of change over time.
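Purely as an illustration (the scan times, file name and layout here are assumptions, not a prescription): for a subject scanned at roughly yearly intervals, the intra-individual design matrix could contain an intercept column and a demeaned time column, so that the second beta coefficient estimates the change per year:

```
# Hypothetical within-subject design matrix: 6 rows = 6 scans of one subject,
# column 1 = intercept, column 2 = scan time in years, demeaned (mean of 0..5 is 2.5).
cat > design_within_subject.txt << 'EOF'
1 -2.5
1 -1.5
1 -0.5
1  0.5
1  1.5
1  2.5
EOF
# Supplying this design to fixelcfestats together with -notest would yield the
# per-column beta coefficient maps without performing any inference; the beta
# for column 2 is this subject's estimated rate of change per year.
```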

As far as “what test to do” goes, I’ll repeat my usual schtick when it comes to the GLM: you need to define what your hypothesis is, with sufficient precision that you can then build your design and contrast matrices accordingly. In my own experience it tends to be a lack of specificity in one’s hypothesis that leads to uncertainty as to how to operate the software.
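For instance (purely illustrative; the group sizes and file names are invented), a second-level comparison of the per-subject rate-of-change maps between your two groups could be encoded as:

```
# Hypothetical group-level design matrix: one row per subject
# (here 3 subjects in group 1 followed by 3 subjects in group 2);
# column 1 = group 1 membership, column 2 = group 2 membership.
cat > design_group.txt << 'EOF'
1 0
1 0
1 0
0 1
0 1
0 1
EOF
# Contrast [1 -1] tests the hypothesis "rate of change is greater in group 1 than in group 2";
# the reverse direction would be tested with [-1 1].
cat > contrast_group.txt << 'EOF'
1 -1
EOF
```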

Also shout-out to this document that describes the relationships between common statistical test terminology and linear models.


#25

So what I’m making of this is that, while there is probably some bias present, it is negligible. Am I correct?

Maybe not precisely my intent, but close enough. I’d say that if the bias were likely to be substantial, I’d be more likely to be able to figure out what it is; therefore, the fact that I can’t would imply that if it’s non-zero, it’s likely to be small.


#26

THANK YOU very much for your detailed answer! I will look into it! :slight_smile: