OK, so there is indeed a *t*-test (or *F*-test) performed per fixel. If that’s all it was, you’d still need to figure out how to convert that to *p*-values, taking into account the many multiple comparisons being performed and how independent these tests are.

In ‘classical’ statistics, this might be done by looking at the area under the curve of the probability density function (PDF) for the *t*-value assuming the null hypothesis (no effect), and potentially applying a Bonferroni or false discovery rate (FDR) correction to account for the multiple comparisons. But that only works under specific assumptions of Normality, constant variance, independence of tests, etc., and these assumptions translate poorly to the massive multiple comparisons problems, with quite a bit of dependence between tests, that we typically deal with in neuroimaging (see e.g. the recent controversy regarding the validity of cluster-wise parametric statistics in fMRI).

For these (and many other) reasons, non-parametric permutation testing approaches are increasingly being used instead. This involves performing the original per-fixel *t*-test, but also a large number of equivalent *t*-tests with random permutations of the data (e.g. random group assignment), the purpose of which is to derive an *empirical* estimate of the PDF of the statistic of interest (the *t*-value in this case) under the null hypothesis. So that’s one aspect: yes, there are *t*-tests per fixel, but there are actually a few thousand of them per fixel, not just one.
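To make that concrete, here’s a minimal sketch of permutation testing for a single fixel (the data, group sizes, and number of permutations are all made up for illustration; this is not the actual FBA implementation, just the general idea):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one fixel's metric (e.g. fibre density) for two groups
group_a = rng.normal(0.50, 0.1, size=20)
group_b = rng.normal(0.45, 0.1, size=20)
values = np.concatenate([group_a, group_b])
labels = np.array([0] * 20 + [1] * 20)

def t_stat(x, labels):
    """Two-sample t-statistic with pooled variance."""
    a, b = x[labels == 0], x[labels == 1]
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

observed = t_stat(values, labels)

# Re-compute the t-value under many random group assignments: this builds
# the *empirical* null distribution (PDF) of the statistic for this fixel
null = np.array([t_stat(values, rng.permutation(labels))
                 for _ in range(5000)])

# Uncorrected one-sided p-value: fraction of permuted t-values at least
# as large as the observed one
p_uncorrected = (np.sum(null >= observed) + 1) / (len(null) + 1)
```

The `+ 1` terms implement the usual convention of including the unpermuted labelling itself among the permutations, which avoids reporting a *p*-value of exactly zero.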

The next aspect is that to ensure sufficient control of false positives over *all* the tests being performed, the permutation testing records the *maximum* *t*-value over all the tests (i.e. all fixels) for each permutation, and generates an estimate of the PDF of the maximal *t*-value under the null. That is then used to map the actual *t*-values computed to *p*-values corrected for multiple comparisons, and that will inevitably mean higher (less significant) *p*-values than the uncorrected (per-fixel) version.
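Extending the sketch above to many fixels, the max-statistic idea looks something like this (again a toy illustration with made-up dimensions, not the actual implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

n_fixels, n_subjects = 1000, 40
labels = np.array([0] * 20 + [1] * 20)
# Hypothetical (subjects x fixels) data matrix; pure noise here
data = rng.normal(size=(n_subjects, n_fixels))

def t_stats(data, labels):
    """Vectorised two-sample t-statistic, one value per fixel."""
    a, b = data[labels == 0], data[labels == 1]
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(0, ddof=1) + (nb - 1) * b.var(0, ddof=1)) / (na + nb - 2)
    return (a.mean(0) - b.mean(0)) / np.sqrt(sp2 * (1 / na + 1 / nb))

observed = t_stats(data, labels)

# For each permutation, keep only the *maximum* t-value over all fixels
max_null = np.array([t_stats(data, rng.permutation(labels)).max()
                     for _ in range(1000)])

# Family-wise-error-corrected p-value per fixel: fraction of permutations
# whose maximal t-value exceeds this fixel's observed t-value
p_fwe = (np.sum(max_null[None, :] >= observed[:, None], axis=1) + 1) \
        / (len(max_null) + 1)
```

Since the maximum over all fixels is always at least as large as any single fixel’s permuted value, these corrected *p*-values can only be higher (less significant) than the per-fixel uncorrected ones, which is exactly the behaviour described above.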

The final aspect is to try to recover some statistical power by making use of the assumption that changes along one fixel are likely to correlate with changes along other fixels in the same WM pathway. This is the connectivity-based fixel enhancement (CFE) part, and this makes use of whole-brain tractography to yield estimates of fixel-fixel connectivity, which can then be used to ‘enhance’ *t*-values using a modified version of the threshold-free cluster enhancement (TFCE) approach proposed by Steve Smith in 2009. With these modifications, for each permutation, the *t*-values are computed (e.g. for a random group assignment), enhanced using the adapted TFCE procedure, and the maximal *enhanced* *t*-value is recorded. This then produces the PDF of the maximal *enhanced* *t*-values, from which *p*-values can be computed that are corrected for multiple comparisons, under the assumption that effects occur along pathways (i.e. we expect correlations between strongly ‘connected’ fixels).
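To give a feel for the enhancement step, here’s a deliberately simplified 1D TFCE-style sketch. The real CFE procedure replaces spatial contiguity with tractography-derived fixel-fixel connectivity and uses its own parameter values, so treat this purely as an illustration of the threshold-integration idea:

```python
import numpy as np

def tfce_1d(t, dh=0.1, E=0.5, H=2.0):
    """Simplified 1D threshold-free cluster enhancement sketch.

    For each threshold h, find contiguous runs of elements with t >= h
    ('clusters'), and add extent**E * h**H * dh to every element of each
    run.  Values supported by large neighbourhoods of elevated t-values
    get boosted relative to isolated peaks.
    """
    enhanced = np.zeros_like(t, dtype=float)
    for h in np.arange(dh, t.max() + dh, dh):
        above = t >= h
        # locate the start/end of each contiguous supra-threshold run
        padded = np.concatenate(([0], above.astype(int), [0]))
        edges = np.diff(padded)
        starts = np.where(edges == 1)[0]
        ends = np.where(edges == -1)[0]
        for s, e in zip(starts, ends):
            extent = e - s
            enhanced[s:e] += extent ** E * h ** H * dh
    return enhanced
```

In the permutation loop described above, it is this *enhanced* statistic whose per-permutation maximum is recorded, so the resulting *p*-values remain family-wise-error corrected while gaining sensitivity to spatially (or, for CFE, connectivity-wise) extended effects.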

So that’s essentially a summary of the statistical procedure used in fixel-based analysis. Hopefully that clarifies how the different bits fit together.

All the best,

Donald.