CFE Multiple/Partial Correlation

Hello all,
I’d like to run some multiple and partial correlation analyses using CFE. I have a single group with three partially correlated, continuous, trait-like variables for each subject. I’m not an expert in contrast matrices, so I wanted to see if I could get some help constructing the appropriate ones. Assuming I have a design matrix that is a column of 1’s followed by three columns (one for each continuous variable):

  1. What would be the correct contrast matrix for a full multiple regression? Would it just be [0 1 1 1]?
  2. What about partial correlations? If I’m interested in variable 1, is it just [0 1 0 0]?
  3. Do I need to de-mean the variables in advance?
  4. Would the -negative option here be equivalent to making any 1’s negative (i.e. a test for negative correlation)?

Thanks for any help you can provide!

Hi John,

While I feel as though my competence with the GLM is improving, I actually don’t have much ‘conventional’ stats experience, so I often run into issues translating between what users are trying to communicate and a numerical expression of the model & hypothesis.

In this case, I’m stuck on “full multiple regression”. I suspect this phrase refers to all variables of interest being regressed against the data simultaneously within a single model. If so, then it’s the inclusion of all three variables within your design matrix that provides the “full multiple regression”. The problem is that this alone provides no information about your hypothesis regarding how the data may fit this model.

If by “partial correlations” you mean how your measurement (e.g. FD / FC / FDC within FBA) varies as a function of a particular variable, within a model that regresses against all variables at once, then yes: [0 1 0 0] will give you the rate of change of the measurement variable as a function of explanatory variable 1; similarly, [0 0 1 0] for variable 2 and [0 0 0 1] for variable 3.
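To make that concrete, here is a minimal numpy sketch (the variable names and data are hypothetical placeholders, not anything fixelcfestats itself requires): the design matrix is a column of ones plus one column per continuous variable, and each of those row contrasts simply selects the fitted slope for one variable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20  # number of subjects (placeholder)

# Three partially correlated, continuous trait-like variables (placeholders)
v1, v2, v3 = rng.standard_normal((3, n))

# Design matrix: intercept column of ones, then one column per variable
X = np.column_stack([np.ones(n), v1, v2, v3])

# y stands in for a single fixel's measurement (e.g. FD) across subjects
y = 0.5 * v1 + rng.standard_normal(n)

# Ordinary least-squares fit: one beta per design-matrix column
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Each contrast picks out the rate of change with respect to one variable
for c in ([0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]):
    print(c, "->", float(np.asarray(c) @ beta))
```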

Using the -negative option allows you to test both for positive values of (rate of change of measurement variable as a function of explanatory variable) and for negative values of that rate of change, within a single execution of fixelcfestats. This is equivalent to e.g. running it once with contrast [0 1 0 0] and then again with [0 -1 0 0]; but doing it in a single execution is faster, as you only need to build the fixel-fixel connectivity matrix once. However, you can’t yet test multiple distinct hypotheses within a single fixelcfestats run; that functionality is on its way.
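As a quick numpy illustration of that equivalence (a sketch of the standard GLM t-statistic, not the fixelcfestats internals): negating the contrast just flips the sign of the t-value, so both tails can be assessed from a single model fit.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
X = np.column_stack([np.ones(n), rng.standard_normal((n, 3))])
y = rng.standard_normal(n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])  # residual variance estimate
XtX_inv = np.linalg.inv(X.T @ X)

def t_value(c):
    c = np.asarray(c, dtype=float)
    return (c @ beta) / np.sqrt(sigma2 * (c @ XtX_inv @ c))

# The negated contrast yields the same t-value with the opposite sign,
# which is why both directions can be tested from one fit
print(t_value([0, 1, 0, 0]))
print(t_value([0, -1, 0, 0]))
```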

Whether or not to de-mean variables in the design matrix can be context-dependent. Here, whether or not you de-mean influences the extent to which you can directly interpret the beta values that the GLM yields; however, I’ll dodge that discussion for brevity this time around. The fundamental outcome of your experiment should not change depending on whether or not you de-mean, unless your explanatory variables vary wildly in magnitude (e.g. one variable has values of ~1e-6, another has values of ~1e+6), in which case de-meaning the variables (and scaling them to unit variance) may provide something akin to preconditioning for the GLM.
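As a rough illustration of that preconditioning point (a toy sketch with made-up magnitudes): z-scoring the covariate columns leaves the model’s explanatory content unchanged, but can dramatically improve the numerical conditioning of the design matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20

# Covariates with wildly different magnitudes (made-up values)
small = 1e-6 * rng.standard_normal(n)
mid = rng.standard_normal(n)
large = 1e+6 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), small, mid, large])

# De-mean each covariate column and scale it to unit variance
Z = X.copy()
Z[:, 1:] = (X[:, 1:] - X[:, 1:].mean(axis=0)) / X[:, 1:].std(axis=0)

print(np.linalg.cond(X))  # enormous: numerically ill-conditioned
print(np.linalg.cond(Z))  # modest: much friendlier for the GLM fit
```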


P.S. If, with Point 1, you were in fact referring to a hypothesis more akin to “Do any of these variables influence the observed measurement?”, then in GLM-speak this would be an F-test with matrix contrast. This is another capability that is not yet available in the public code but has been implemented and is on its way.
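For reference, in GLM terms such an omnibus hypothesis is expressed as a matrix contrast with one row per variable of interest; something like the following (a sketch only, since the fixelcfestats interface for this isn’t public yet):

```python
import numpy as np

# One row per explanatory variable; the F-test asks whether the three
# slopes are jointly non-zero, in either direction
F_contrast = np.array([[0, 1, 0, 0],
                       [0, 0, 1, 0],
                       [0, 0, 0, 1]])
```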

Rob


Hi Rob,
Thanks for the helpful reply. Your readings of my intended models are right. In this case, the 3 variables of interest are correlated (and conceptually related). The idea is that (1) would be a sort of omnibus test, and subsequent tests would look for any variance in FD/FC/FDC associated with each individual measure (while controlling for the others). So, for (1), we want to test the hypothesis that a fixel-stat covaries with each metric in the same direction. In that case, does the [0 1 1 1] contrast make sense?

John

So, for (1), we want to test the hypothesis that a fixel-stat covaries with each metric in the same direction. In that case, does the [0 1 1 1] contrast make sense?

Again, it’s slightly ambiguous translating between your description and what actually happens within the GLM. I’ll try to describe what this would actually do, and you can comment on whether this is in fact what you’re looking for, or if not, where the difference lies.

Let’s say you have three covariates: A, B, and C; and you’re going to provide fixel-wise FD measures as your input. By defining the contrast [0 1 1 1], your hypothesis is:

H1: dFD/dA + dFD/dB + dFD/dC > 0

That is, the sum of rates of change of FD with respect to the three measures is expected to be zero when the data are permuted, but greater than zero in the non-permuted data.

So there are a few things to be aware of here:

  • This doesn’t require that all three relationships be positive, only that the sum of the three is positive.

  • If FD is strongly negatively correlated with one of the three variables, that may effectively cancel out the contributions from the other two (see the numeric sketch after this list).

  • It assumes that adding those rates of change makes sense, even though different explanatory variables quite often come with different units and/or magnitudes.

  • A strong negative association will not be identified by this test.
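To see the cancellation issue numerically (a toy sketch with made-up betas, nothing fitted to real data):

```python
import numpy as np

# Made-up fitted betas: [intercept, dFD/dA, dFD/dB, dFD/dC]
beta = np.array([3.0, 0.5, 0.5, -1.0])

c = np.array([0, 1, 1, 1])
print(c @ beta)  # 0.0: two positive relationships cancelled by one negative
```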

Given your description of this as seeking an “omnibus” test, I suspect that an F-test would be more faithful to what you’re actually trying to achieve, even though negative correlations would contribute to the statistic.
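As a sketch of that last caveat (standard GLM F-statistic in numpy, hypothetical data, not anything CFE-specific): the F-statistic is a quadratic form in the contrasted betas, so the sign of each association drops out and a purely negative effect still produces a large statistic.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
A, B, C = rng.standard_normal((3, n))
X = np.column_stack([np.ones(n), A, B, C])

# Data driven by a strongly *negative* association with C only
y = -2.0 * C + 0.3 * rng.standard_normal(n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])

M = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

# F = (Mb)' [M (X'X)^-1 M']^-1 (Mb) / (q * sigma2), with q rows in M
Mb = M @ beta
cov = M @ np.linalg.inv(X.T @ X) @ M.T
F = Mb @ np.linalg.solve(cov, Mb) / (M.shape[0] * sigma2)
print(F)  # large despite the association being negative
```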


Thanks, Rob. This is very helpful. I think you’re right that the F-test you describe is more faithful to what I’m going for. Any idea when that might be implemented? I’d be curious to hear about any other statistical developments for FBA as well.

John