I am planning to do a fixel-based analysis (FBA): I have multi-shell data and used the MSMT-CSD algorithm to estimate response functions at the group level.

However:
a) I have a relatively low number of subjects (40)
b) I do not plan to compare groups but to correlate continuous variables (e.g., age, cognitive performance) with fixels across the entire sample.

I was wondering whether a lack of statistical power is a likely prospect in this situation. Of course, it is always something to keep in mind when dealing with small samples, but some methods demand more data than others to reach adequate sensitivity. Do you have any insights into what would be realistic requirements to achieve satisfactory statistical power for FBA? For example, how would the sensitivity of FBA compare with that of a VBM-style analysis on the FA skeleton (TBSS)? From what I understand, the statistical framework is quite similar; of course, the information tested is much more precise in the case of FBA.

Personally, I do not see the number of subjects as a problem: statistically significant effects have been demonstrated in group comparisons with smaller samples.

Ultimately, group comparisons and correlations with continuous variables are not treated any differently within the internals of the GLM. A design matrix column representing group labels is just a continuous variable; it simply happens that the values provided are only ever -1 or 1.
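To illustrate that equivalence numerically (a hypothetical toy example, not MRtrix3 code; the group sizes and effect size are invented): fitting a GLM with a ±1 group regressor yields exactly the same t-statistic as a classical pooled-variance two-sample t-test.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 subjects per group, with group 2 shifted upwards
y = np.concatenate([rng.normal(0.0, 1.0, 20),
                    rng.normal(0.8, 1.0, 20)])
g = np.concatenate([-np.ones(20), np.ones(20)])  # group labels as a regressor

# GLM: y = b0 + b1*g + e; t-statistic for b1
X = np.column_stack([np.ones_like(g), g])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
sigma2 = resid @ resid / (len(y) - X.shape[1])   # residual variance
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_glm = beta[1] / se

# Classical pooled-variance two-sample t-test on the same data
y0, y1 = y[g < 0], y[g > 0]
sp2 = (((y0 - y0.mean())**2).sum()
       + ((y1 - y1.mean())**2).sum()) / (len(y) - 2)
t_classic = (y1.mean() - y0.mean()) / np.sqrt(sp2 * (1/len(y0) + 1/len(y1)))

print(t_glm, t_classic)  # identical up to floating-point error
```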

Indeed, I believe I can present a contrary case to your hypothesis. Imagine an experiment where there is a very strong correlation between two continuous parameters, and little unexplained variance. When you regress your model against the data, the residuals are exceptionally small. Now take one of those two parameters, find the median, assign the value of -1 to those subjects below the median and +1 to those above, and repeat the regression. You now have a “group comparison” where the resulting fit will possess quite a lot more residual variance than it did prior to the discretisation. Your statistical power is a function of many things, including the model residual error (it’s right there in the denominator of the t-statistic); so the better your model can explain the variance within your input data, the greater your power to discern mild yet robust effects.

Do you have any insights into what would be realistic requirements to achieve satisfactory statistical power for FBA?

This seems to be an open question. I haven’t had direct experience with the requisite range of studies to offer a heuristic, but there’s no reason why such estimates couldn’t be derived just as they would be for any other statistical method.
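For instance, a back-of-envelope power estimate for detecting a given correlation in a single test can be obtained via the standard Fisher z approximation (a generic statistical sketch, not FBA-specific; note it ignores the family-wise error correction that permutation-based inference applies across fixels, which in practice demands a considerably stronger effect):

```python
import math
from statistics import NormalDist

def power_correlation(r, n, alpha=0.05):
    """Approximate power to detect a Pearson correlation r with n subjects,
    two-sided test at level alpha, using the Fisher z transformation."""
    z = math.atanh(r)                  # Fisher z of the true correlation
    se = 1.0 / math.sqrt(n - 3)        # standard error of z under H1
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    nd = NormalDist()
    return nd.cdf(z / se - z_crit) + nd.cdf(-z / se - z_crit)

print(round(power_correlation(0.4, 40), 2))  # ~0.73 for r = 0.4, n = 40
```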

For example, how would the sensitivity of FBA compare with that of a VBM-style analysis on the FA skeleton (TBSS)? From what I understand, the statistical framework is quite similar; of course, the information tested is much more precise in the case of FBA.

That’s a bit of a can of worms. There are multiple factors at play within the general framework of “statistical inference on neuroimaging data” that are either very different or identical between these two approaches; hopefully I’ll get to disentangle them in some future work.

I also wouldn’t criticise FA for its precision: by the strict definition of the term, it can in fact be measured very precisely. The issue is instead one of specificity: in regions containing crossing fibres, a change in FA cannot be attributed to any one particular fibre population.