Welcome Shana!
Anderson Winkler’s 2014 manuscript is the reference document for everything permutation testing. It’s pretty dense, but the concepts are described there very precisely if you can work your way through it.
It’s important to recognize that the statistical inference in connectomestats is, in fact, almost identical to that in fixelcfestats and mrclusterstats. The only difference is in the “statistical enhancement” step: that is, how observation of an effect in one element being tested (whether a connectome edge, voxel, or fixel) enhances the belief in other “related” elements where an effect was also observed. For fixels, this is CFE; for voxels, this is (by default) TFCE; for connectomes, this is (by default) TFNBS (currently called “NBSE” in the software, but it’s the same method, and the name will change soon). Since your question doesn’t actually include anything specific to TFNBS, I can explain what’s going on using more general language, i.e. not specifically tied to connectome stats.
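To make the “enhancement” idea concrete, here is a minimal 1D sketch of TFCE-style enhancement. This is purely illustrative, not MRtrix3’s actual implementation: the function name is hypothetical, real implementations operate over voxel / fixel / edge adjacency structures rather than a 1D array, and the parameter defaults (E=0.5, H=2) are simply the values commonly quoted for TFCE.

```python
import numpy as np

def tfce_1d(stat, dh=0.1, E=0.5, H=2.0):
    """Illustrative 1D TFCE: for each element, integrate
    (cluster extent)^E * (threshold height)^H over thresholds h.
    An element's enhanced value grows when it sits inside large,
    tall clusters of supra-threshold neighbours."""
    enhanced = np.zeros_like(stat, dtype=float)
    if stat.max() <= 0:
        return enhanced
    for h in np.arange(dh, stat.max() + dh, dh):
        supra = stat >= h
        # Find contiguous runs ("clusters") of supra-threshold elements
        i, n = 0, len(stat)
        while i < n:
            if supra[i]:
                j = i
                while j < n and supra[j]:
                    j += 1
                extent = j - i
                enhanced[i:j] += (extent ** E) * (h ** H) * dh
                i = j
            else:
                i += 1
    return enhanced
```

Elements belonging to the same cluster at a given threshold receive the same contribution, which is the “related elements reinforce each other” behaviour described above.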
How is the FWE correction separate from the permutation testing and why are both being done?
These are not independent procedures. Permutation testing is a component of what permits FWE correction: it provides the data from which the null distribution is formed, and the fact that it is the maximal enhanced statistic of each permutation that goes into the null distribution is what provides multiple comparison correction with FWE control.
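As a rough sketch of that max-statistic logic (a hypothetical illustration, not the actual implementation; in practice the un-shuffled labeling is typically included in the null distribution as well):

```python
import numpy as np

def fwe_corrected_pvalues(enhanced_default, enhanced_perms):
    """enhanced_default: (n_elements,) enhanced statistics for the true labeling.
    enhanced_perms: (n_perms, n_elements) enhanced statistics per shuffle.
    The null distribution is the MAXIMUM enhanced statistic across all
    elements within each permutation; comparing every element against that
    single distribution is what yields family-wise error control."""
    null_max = enhanced_perms.max(axis=1)  # one value per shuffle
    # For each element: fraction of shuffles whose maximum meets or
    # exceeds the value observed for the true labeling
    pfwe = (null_max[None, :] >= enhanced_default[:, None]).mean(axis=1)
    return pfwe
```

Because every element is compared against the same distribution of per-permutation maxima, an element only attains a small FWE-corrected p-value if it exceeds what the *strongest* chance effect anywhere in the data tends to be.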
I don’t really understand what the uncorrected p-values are, as they are also dependent on the permutation test.
These are literally: for each “element” (connectome edge in your case), for what fraction of the permutations was the enhanced statistic greater than that observed for the default permutation (i.e. when no shuffling occurs; the actual labeling of the data). So this is a direct non-parametric implementation of a p-value (“likelihood of this occurring by chance”) that is performed independently for each element tested, and therefore does not incorporate any form of multiple comparison correction.
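For contrast with the FWE case, that per-element computation could be sketched as follows (again a hypothetical illustration, not the actual code):

```python
import numpy as np

def uncorrected_pvalues(enhanced_default, enhanced_perms):
    """Per-element uncorrected p-value: the fraction of shuffles whose
    enhanced statistic at THAT element meets or exceeds the value
    observed for the true labeling. Each element is compared only
    against its own null distribution, so no multiple comparison
    correction is involved."""
    return (enhanced_perms >= enhanced_default[None, :]).mean(axis=0)
```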
The uncorrected p-value images are likely not of any use in most scenarios; they are generated simply because they can be.
I would like to include this analysis in a paper that also includes related metrics, for which I used FDR as my multiple comparisons correction method and I would like to have the methods be the same for each of my metrics.
Either:

- These additional metrics of which you speak are defined at the connectome edge level, in which case there is likely no justification for using different statistical inference methods for different metrics; i.e. you could either run those other metrics through connectomestats, or run the data pertaining to this thread through whatever analysis you used previously for the other metrics;
- These additional metrics are not defined at the connectome edge level, in which case:
  - If they are defined at the connectome node level (and if your parcellation doesn’t have too many nodes), this is actually the use case that led to creation of the vectorstats command. So you could provide (weak) FWE control for those metrics if you desired.
  - If they are defined in a wholly different domain, e.g. global network metrics, then if I were a reviewer I wouldn’t have an issue with such metrics being reported using FDR while TFNBS is utilized with FWE correction.
(P.S. That’s not an invitation to nominate me. I Am Reviewer #2. You Have Been Warned.)
Would it make sense to use the uncorrected p-values after some reasonable number of permutations and then do FDR correction
Depends on exactly what kind of FDR correction procedure you’re proposing; it’s a bit of an umbrella term for a lot of different methods. But if you’re not a statistics expert, I would probably advise against trying to do statistical inference differently to how the developers of statistical inference methods in the field do it.
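For reference, the most commonly encountered member of that umbrella is the Benjamini–Hochberg step-up procedure. A minimal sketch (assuming independent or positively dependent tests, which is itself one choice among FDR variants; the function name is mine, not from any particular package):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure. Sort the p-values, compare
    the k-th smallest against q*k/m, find the largest k that passes, and
    reject all hypotheses up to that rank. Returns a boolean mask."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    thresh = q * np.arange(1, m + 1) / m  # q*k/m for k = 1..m
    below = ranked <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])
        reject[order[:k + 1]] = True  # step-up: reject all ranks <= k
    return reject
```

Note this controls the expected *proportion* of false discoveries, which is a weaker (and differently interpreted) guarantee than the FWE control that the max-statistic permutation approach provides.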
Cheers
Rob