Connectomestats FWE correction

Hi All,

I am trying to compare connectome stats between two groups of subjects and want to make sure I’m not misunderstanding or trying to do something incorrectly. How is the FWE correction separate from the permutation testing and why are both being done? I understand that the FWE corrected values are in comparison to the ranked values in the distribution after permutation, but I don’t really understand what the uncorrected p-values are, as they are also dependent on the permutation test.

I would like to include this analysis in a paper that also includes related metrics, for which I used FDR as my multiple comparisons correction method, and I would like the methods to be the same for each of my metrics. Would it make sense to use the uncorrected p-values after some reasonable number of permutations and then apply FDR correction, or what would you recommend?

I realize I am lacking in some more fundamental understanding of the method, so any help is very much appreciated! Thank you in advance!

Welcome Shana!

Anderson Winkler’s 2014 manuscript is the reference document for everything permutation testing. It’s pretty dense, but the concepts are described there very precisely if you can work your way through it.

It’s important to recognize that the statistical inference in connectomestats is, in fact, almost identical to that in fixelcfestats and mrclusterstats. The only difference is in the “statistical enhancement” step: that is, how observation of an effect in one element being tested (whether a connectome edge, or voxel, or fixel) enhances the belief in other “related” elements where an effect was also observed. For fixels, this is CFE; for voxels, this is (by default) TFCE; for connectomes, this is (by default) TFNBS (currently called “NBSE” in the software, but it’s the same method, and the name will change soon). Since your question doesn’t actually include anything specific to TFNBS, I can explain what’s going on using more general language, i.e. not specifically tied to connectome stats.

How is the FWE correction separate from the permutation testing and why are both being done?

These are not independent details: permutation testing is a component of what makes FWE correction possible. The permutations provide the data that form the null distribution, and it is the fact that the maximal enhanced statistic of each permutation is what goes into that null distribution that provides multiple comparison correction with FWE control.

I don’t really understand what the uncorrected p-values are, as they are also dependent on the permutation test.

These are literally: for each “element” (connectome edge in your case), the fraction of permutations for which the enhanced statistic was greater than that observed for the default permutation (i.e. when no shuffling occurs; the actual labelling of the data). So this is a direct non-parametric implementation of a p-value (“likelihood of this occurring by chance”) that is performed independently for each element tested, and therefore does not incorporate any form of multiple comparison correction.

The uncorrected p-value outputs are unlikely to be of any use in almost all scenarios; they are generated simply because “they can be”.
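To make the distinction concrete, here is a minimal numpy sketch (purely illustrative, not the MRtrix3 implementation; all names are invented) of how uncorrected and FWE-corrected p-values can both be derived from the same set of permutations:

```python
import numpy as np

# observed: enhanced statistic for each element (e.g. connectome edge) under
#           the default permutation (no shuffling) -- shape (n_elements,)
# permuted: enhanced statistics for each shuffle -- shape (n_permutations, n_elements)
def permutation_pvalues(observed, permuted):
    n_perm = permuted.shape[0]

    # Uncorrected: each element is compared only against its own null
    # distribution, so no multiple comparison correction is involved.
    p_uncorrected = (permuted >= observed).sum(axis=0) / n_perm

    # FWE-corrected: every element is compared against the distribution of
    # the *maximum* enhanced statistic across all elements in each shuffle;
    # this is what provides family-wise error control.
    max_null = permuted.max(axis=1)                      # shape (n_permutations,)
    p_fwe = (max_null[:, None] >= observed).sum(axis=0) / n_perm

    return p_uncorrected, p_fwe
```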

I would like to include this analysis in a paper that also includes related metrics, for which I used FDR as my multiple comparisons correction method and I would like to have the methods be the same for each of my metrics.

Either:

  1. These additional metrics of which you speak are defined at the connectome edge level, in which case there is likely no justification for using different statistical inference methods for different metrics; i.e. you could either run those other metrics through connectomestats, or run the data pertaining to this thread through whatever analysis you used previously for the other metrics;

  2. These additional metrics are not defined at the connectome edge level, in which case:

    1. If they are defined at the connectome node level (and if your parcellation doesn’t have too many nodes), this is actually the use case that led to the creation of the vectorstats command. So you could provide (weak) FWE control for those metrics if you desired.

    2. If they are defined in a wholly different domain, e.g. global network metrics, then if I were a reviewer I wouldn’t have an issue with such metrics being reported using FDR but TFNBS being utilized with FWE correction.

      (P.S. That’s not an invitation to nominate me. I Am Reviewer #2. You Have Been Warned. :smiling_imp:)

Would it make sense to use the uncorrected p-values after some reasonable number of permutations and then do FDR correction

Depends on exactly what kind of FDR correction procedure you’re proposing; it’s a bit of an umbrella term for a lot of different methods. But if you’re not a statistics expert, I would probably advise against trying to do statistical inference differently to how the developers of statistical inference methods in the field do it.
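For concreteness only (and not as an endorsement, given the caveat above): the procedure most commonly meant by “FDR” is Benjamini-Hochberg, and applying it to a vector of conventional uncorrected p-values might look something like the following sketch, where the file name is hypothetical:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical file containing one conventional uncorrected p-value per edge
p_uncorrected = np.loadtxt("uncorrected_pvalues.txt")

# Benjamini-Hochberg FDR at q < 0.05
reject, p_fdr, _, _ = multipletests(p_uncorrected, alpha=0.05, method="fdr_bh")
print("Edges surviving FDR q<0.05:", np.flatnonzero(reject))
```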

Cheers
Rob

Thank you so much, that helped immensely with my understanding of what is going on!

I do have one other question. I am trying to relate the connectome stats to a gray matter analysis using the a2009s FreeSurfer parcellation (which is what I also used for the creation of the connectomes). Is there a way to run connectomestats, but constrain it to only the nodes in which I saw gray matter differences between my groups? (i.e. I would like to identify whether differences that I have identified in gray matter regions translate to differences in white matter connections between those regions) Or should I go back a step and only use the subset of nodes when I create the connectomes in the first place?

Is there a way to run connectomestats, but constrain it to only the nodes in which I saw gray matter differences between my groups?

Unfortunately no; only because I’ve not yet gotten around to implementing it. Rather than constructing the connectomes using a subset of nodes, build the connectomes using all nodes, and then just select from the matrix data those elements in which you are interested.

However if you are only interested in the white matter connections between the grey matter regions of interest, then I don’t think you should actually be using connectomestats: the statistical enhancement algorithm that is the unique component of that command isn’t really applicable to such data. I don’t think a network-based connected-component analysis makes much sense if you are only dealing with a small subset of nodes. You may well be better off, e.g. if you have 4 GM regions of interest, extracting from your connectome matrices the 6 unique edges between those nodes, and feeding them to vectorstats, which will provide FWE control but treat the individual edge values entirely independently.
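As a sketch of that extraction step (node indices, file names, and the delimiter are all assumptions to adapt to your own data):

```python
import numpy as np
from itertools import combinations

roi_nodes = [12, 47, 63, 88]                     # hypothetical 0-based indices of the 4 GM ROIs
edge_pairs = list(combinations(roi_nodes, 2))    # the 6 unique node pairs

subjects = ["sub-01", "sub-02"]                  # etc.

for sub in subjects:
    # Full connectome matrix for this subject; adjust delimiter to match your files
    matrix = np.loadtxt(f"{sub}_connectome.csv", delimiter=",")
    edges = [matrix[i, j] for i, j in edge_pairs]
    # One small vector per subject, containing only the 6 edges of interest
    np.savetxt(f"{sub}_edges.csv", np.atleast_2d(edges), delimiter=",")
```

The resulting per-subject vectors can then be provided to vectorstats (check that command’s documentation for the exact input format it expects).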

Ah thank you! I appreciate your help on this!

Dear Rob,

I was lucky to find this post as I had a very similar problem. I appreciate your answers and would like to ask some questions to make sure I’m understanding it correctly.
After we executed “connectomestats” we had a stack of result files. Given my limited statistical experience, fwe_1mpvalue, uncorrected_pvalue, and tvalue caught my attention first.
At first, I thought the uncorrected_pvalue file contained the original p-value and the fwe_1mpvalue contained the fwe-corrected p-value. But to my surprise, the p-values in the uncorrected_pvalue file are all greater than 0.05, while the fwe_1mpvalue file has some p-values less than 0.05.

  1. From your answers here, the uncorrected_pvalue is the probability that the shuffled data yield a statistic greater than the unshuffled data. So our results are reasonable, right?

  2. As far as I know, t-values are usually larger when p-values are smaller. However, in my results, I observe that the t-value increases as the p-value increases. Am I misinterpreting the results? What do these t-values represent?

  3. Do we have documentation explaining the output files? They are a bit complicated, especially for a layman in permutation statistics.

Any help is appreciated.

Best,
Ziqian

Hi Ziqian,

Firstly I’ll clarify one potential source of confusion that I’ve spotted due to your question, regardless of the extent to which it may be a stumbling block for you.

For all versions of MRtrix3, the p-value outputs of statistical inference commands have always been (1-p); that is, applying a lower threshold of 0.95 during visualisation is equivalent to locating those elements where p<0.05. In an attempt to better communicate this, with the 3.0.0 update I changed the file name of the FWE-corrected data output by statistical inference to be “fwe_1mpvalue” rather than “fwe_pvalue”. However I have now realised that I failed to make the corresponding change to the uncorrected p-value data, which is also encoded as (1-p).

So for both corrected and uncorrected p-values (the latter of which are most typically ignored entirely in such experiments), the contents of these files should be interpreted as containing (1-p). I think that at least explains your point 2.
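In other words (a trivial sketch; the file name is just an example):

```python
import numpy as np

one_minus_p = np.loadtxt("fwe_1mpvalue.csv")   # contents are (1 - p); adjust delimiter as needed
p = 1.0 - one_minus_p
significant = one_minus_p > 0.95               # equivalent to p < 0.05
```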

For point 1: The value stored within the file containing the uncorrected p-value for each “element” tested (connectome edges in this case) is simply the fraction of shuffles for which the enhanced statistic in the default permutation (i.e. original data labelling, no shuffling) is greater than that of the shuffled data.

Point 3: No, but that’s a good idea, particularly since with the statistical inference enhancements in 3.0.0 there are situations where there can be even more output files generated with increasingly complex names / interpretation. I’ve added a GitHub issue requesting that such documentation be written.

Cheers
Rob
