How to improve FBA results?

I would like to know how to improve my FBA (fixel-based analysis) results.
These are the results I obtained by following the recommended steps. The first graph shows the corrected (1-p) values, and the second graph shows the uncorrected (1-p) values. Most (1-p) values are not significant after correction. Is this normal?

Take FC (Pre vs 12) as an example:
The range of the FWE-corrected (1-p) values is 0-0.9905, and the range of the uncorrected (1-p) values is 0-0.9998.
The first graph below shows streamlines thresholded at 0.95-0.9905 (FWE-corrected), and the second graph below shows streamlines thresholded at 0.95-0.9905 (uncorrected).

1. Why does the first diagram show so few significant streamlines? Is that normal? Is there a way to improve the results, or is that just how it is?
2. The third diagram shows the whole-brain tractography streamlines (20 million, SIFT-reduced to 2 million). Does it look right?

Lastly, how should I understand the “Connectivity-based Fixel Enhancement and Non-parametric Permutation Testing” performed by the fixelcfestats command? I read the article “Connectivity-based fixel enhancement: Whole-brain statistical analysis of diffusion MRI measures in the presence of crossing fibres”, but I still don’t understand what CFE is. Is CFE a statistical analysis method like a paired t-test? And which testing method is used for the “non-parametric permutation testing”?

Your advice and help are greatly appreciated.


Yes, in general, any correction for multiple comparisons (which this is) will increase your p-values (i.e. decrease significance). There’s not much we can do about this, it’s a general statistical result (see e.g. here for details).

As above: yes, this is normal, and without acquiring more data, there isn’t anything simple that can be done to improve these results (assuming all the processing steps have been performed as well as they can). The statistical correction procedures used in CFE are already pretty much as good as we can make them. There are a few things that @rsmith is working on, but I don’t expect they will make an enormous difference to your results, and they certainly won’t make your corrected results as extensive as the uncorrected ones.

It looks plausible, but it’s impossible to tell from a simple snapshot like this – there are far too many streamlines to get a sense of streamline density. A better way to verify is to generate the corresponding TDI (using tckmap) and compare it with the WM FOD that was used to generate the streamlines in the first place (e.g. figure 9 in the SIFT paper).

CFE refers to the overall framework for correction for multiple comparisons using permutation testing, with statistical enhancement along white matter pathways based on estimates of connectivity derived using tractography. The framework in general is relatively agnostic to the exact test performed for each fixel, but the current implementation will typically perform a t-test for each fixel (the ability to perform an F-test has also recently been introduced). I’m not sure this answers your question, but I can’t think of a way to explain this simply without referring you to the CFE article – it’s quite an involved framework!

Here’s my understanding of the fixelcfestats command: the design_matrix and contrast_matrix are designed according to the statistical method I select, like a paired t-test or a two-sample unpaired t-test. Then each fixel’s value (FD, FC or FDC) is compared across the two groups according to that statistical method.
Hence, my puzzle is: at what step is CFE applied? Before the statistics I choose (like a paired t-test), or after? More succinctly, what role does CFE play in the fixelcfestats command?


OK, so there is indeed a t-test (or F-test) performed per fixel. If that’s all it was, you’d still need to figure out how to convert that to p-values, taking into account the many multiple comparisons being performed and how independent these tests are.

In ‘classical’ statistics, this might be done by looking at the area under the curve of the probability density function (PDF) for the t-value assuming the null hypothesis (no effect), and potentially applying a Bonferroni or false discovery rate (FDR) correction to account for the multiple comparisons. But that only works under specific assumptions of Normality, constant variance, independence of tests, etc., and this translates poorly to the massive multiple comparisons problems, with quite a bit of dependence between tests, that we typically deal with in neuroimaging (see e.g. the recent controversy regarding the validity of cluster-wise parametric statistics in fMRI).
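To make the contrast with permutation testing concrete, here’s a minimal sketch (Python/NumPy with simulated data, nothing to do with the MRtrix code itself) of a classical Bonferroni correction applied to many simulated null tests: each p-value is simply scaled up by the number of tests.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy example: 1000 independent tests ("fixels"), no true effect anywhere,
# so under the null the uncorrected p-values are uniformly distributed.
n_tests = 1000
p_uncorrected = rng.uniform(size=n_tests)

# Bonferroni correction: multiply each p-value by the number of tests
# (capped at 1). Corrected p-values can only increase.
p_bonferroni = np.minimum(p_uncorrected * n_tests, 1.0)

print("uncorrected p < 0.05:", int(np.sum(p_uncorrected < 0.05)))
print("Bonferroni p < 0.05:", int(np.sum(p_bonferroni < 0.05)))
```

With no true effect, roughly 5% of the uncorrected tests come out ‘significant’ purely by chance, while almost none survive the Bonferroni correction – the same trade-off you see between your uncorrected and FWE-corrected maps.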

For these (and many other) reasons, non-parametric permutation testing approaches are increasingly being used instead. This involves performing the original per-fixel t-test, but also a large number of equivalent t-tests with random permutations of the data (e.g. random group assignment), the purpose of which is to derive an empirical estimate of the PDF of the statistic of interest (the t-value in this case) under the null hypothesis. So that’s one aspect: yes, there are t-tests per fixel, but there are actually a few thousand of them per fixel, not just one.
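As an illustration of the idea (again a toy Python/NumPy sketch with simulated data, not the fixelcfestats implementation), here is permutation testing for a single hypothetical ‘fixel’: the group labels are shuffled many times, the t-statistic is recomputed each time, and the collection of permuted values serves as an empirical null distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical groups of 20 subjects each, one fixel metric (e.g. FD),
# with a small simulated group difference.
group_a = rng.normal(0.5, 1.0, size=20)
group_b = rng.normal(0.0, 1.0, size=20)

def t_stat(x, y):
    # Unpaired two-sample t statistic (pooled-variance form).
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(sp2 * (1 / nx + 1 / ny))

t_obs = t_stat(group_a, group_b)

# Permutations: shuffle the group labels and recompute the t statistic;
# the permuted t-values are an empirical sample of the null PDF.
pooled = np.concatenate([group_a, group_b])
n_perm = 5000
t_null = np.empty(n_perm)
for i in range(n_perm):
    perm = rng.permutation(pooled)
    t_null[i] = t_stat(perm[:20], perm[20:])

# Empirical (uncorrected) p-value: fraction of null t-values >= observed.
p_perm = (1 + np.sum(t_null >= t_obs)) / (1 + n_perm)
print(f"observed t = {t_obs:.3f}, permutation p = {p_perm:.4f}")
```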

The next aspect is that to ensure sufficient control of false positives over all the tests being performed, the permutation testing records the maximum t-value over all the tests (i.e. all fixels) for each permutation, and generates an estimate of the PDF of the maximal t-value under the null. That is then used to map the actual t-values computed to p-values corrected for multiple comparisons, and that will inevitably mean higher (less significant) p-values than the uncorrected (per-fixel) version.
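The maximum-statistic idea can be sketched like this (a toy Python/NumPy example using simulated data and an unpaired t-test, not the actual implementation): for each permutation, only the largest t-value across all fixels is kept, and each observed t-value is then compared against that distribution of maxima.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 500 "fixels", two groups of 15 subjects, no true effect.
n_fixels, n = 500, 15
a = rng.normal(size=(n, n_fixels))
b = rng.normal(size=(n, n_fixels))

def t_map(x, y):
    # Vectorised unpaired two-sample t statistic, one value per fixel.
    sp2 = ((n - 1) * x.var(0, ddof=1) + (n - 1) * y.var(0, ddof=1)) / (2 * n - 2)
    return (x.mean(0) - y.mean(0)) / np.sqrt(sp2 * (2 / n))

t_obs = t_map(a, b)

# FWE correction via the maximum statistic: for each permutation, record
# only the LARGEST t-value over all fixels, building the null PDF of the
# maximal statistic.
pooled = np.concatenate([a, b])
n_perm = 1000
max_null = np.empty(n_perm)
for i in range(n_perm):
    idx = rng.permutation(2 * n)
    max_null[i] = t_map(pooled[idx[:n]], pooled[idx[n:]]).max()

# Corrected p-value per fixel: how often the null MAXIMUM beats it.
p_fwe = (1 + (max_null[None, :] >= t_obs[:, None]).sum(axis=1)) / (1 + n_perm)
print("smallest FWE-corrected p-value:", p_fwe.min())
```

Because every fixel is compared against the distribution of the maximum over all fixels, the corrected p-values are necessarily at least as large as the per-fixel uncorrected ones.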

The final aspect is to try to recover some statistical power by making use of the assumption that changes along one fixel are likely to correlate with changes along other fixels in the same WM pathway. This is the connectivity-based fixel enhancement (CFE) part, and this makes use of whole-brain tractography to yield estimates of fixel-fixel connectivity, which can then be used to ‘enhance’ t-values using a modified version of the threshold-free cluster enhancement (TFCE) approach proposed by Steve Smith in 2009. With these modifications, for each permutation, the t-values are computed (e.g. for a random group assignment), enhanced using the adapted TFCE procedure, and the maximal enhanced t-value is recorded. This then produces the PDF of the maximal enhanced t-values, from which p-values can be computed that are corrected for multiple comparisons, under the assumption that effects occur along pathways (i.e. we expect correlations between strongly ‘connected’ fixels).
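To give a flavour of the enhancement step, here is a heavily simplified, hypothetical sketch of a TFCE-style integral with a connectivity-weighted extent. It omits the connectivity exponent, smoothing, and other details of the real CFE implementation, and the parameter values, t-values and connectivity matrix below are entirely made up for illustration.

```python
import numpy as np

def cfe_enhance(t, conn, dh=0.1, E=2.0, H=3.0):
    # TFCE-style integral per fixel: sum over thresholds h of
    # (connectivity-weighted extent at h)^E * h^H * dh,
    # accumulated only while the fixel itself exceeds h.
    t = np.asarray(t, dtype=float)
    enhanced = np.zeros_like(t)
    h = dh
    while h <= t.max():
        supra = (t >= h).astype(float)   # which fixels exceed threshold h
        extent = conn @ supra            # connectivity-weighted "extent"
        enhanced += np.where(t >= h, extent**E * h**H * dh, 0.0)
        h += dh
    return enhanced

# Hypothetical 5-fixel example: fixels 0-2 form a strongly connected
# "pathway"; fixel 3 is isolated but has a similar raw t-value.
t = np.array([2.0, 2.1, 1.9, 2.0, 0.3])
conn = np.array([
    [1.0, 0.8, 0.8, 0.0, 0.0],
    [0.8, 1.0, 0.8, 0.0, 0.0],
    [0.8, 0.8, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])
e = cfe_enhance(t, conn)
print(e)
```

Fixels 0-2 end up with substantially larger enhanced values than fixel 3, even though their raw t-values are similar: effects that are consistent along a connected pathway get boosted, which is exactly the power recovery described above.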

So that’s essentially a summary of the statistical procedure used in fixel-based analysis, hopefully that’ll clarify how the different bits fit together.
All the best,