[WARNING] Design matrix conditioning is poor (condition number: 428.438); model fitting may be highly influenced by noise

Hi MRtrix3 users:

I followed the steps for FBA [Fibre density and cross-section - Multi-tissue CSD — MRtrix 3.0 documentation] and reached step 23.

I got the warning ‘[WARNING] Design matrix conditioning is poor (condition number: 428.438); model fitting may be highly influenced by noise’.
Here is my design_matrix.txt: the second column is sex, the third is age.
My contrast_matrix.txt is 0 1 1. I want to know the relationship of age and sex to FD/FC/FDC.

Should this be worrisome, or can it be ignored?



[Screenshot: design_matrix.txt]


Hi @yichao,

At least part of what is driving the condition number up is the fact that ages are being provided in years without normalisation. This makes it “difficult” for the model to disentangle between the global intercept and the offset based on the mean age of the cohort. Modifying the design matrix by subtracting the mean age from all subjects would most likely reduce the condition number to the point where the warning is no longer generated.
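As a minimal sketch of that fix (assuming the design matrix is a plain whitespace-delimited text file whose columns are [intercept, sex, age]; the file names here are just placeholders):

    import numpy as np

    # Load the design matrix: column 0 = global intercept, column 1 = sex, column 2 = age in years
    X = np.loadtxt("design_matrix.txt")
    print("condition number before demeaning:", np.linalg.cond(X))

    # Subtract the cohort mean age so the age column is centred on zero
    X[:, 2] -= X[:, 2].mean()
    print("condition number after demeaning:", np.linalg.cond(X))

    # Save the demeaned matrix for use in place of the original
    np.savetxt("design_matrix_demeaned.txt", X, fmt="%.6f")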

I am however glad that you asked the question, because:

My contrast_matrix.txt is 0 1 1. I want to know the relationship of age and sex to FD/FC/FDC.

You are currently not testing the hypothesis that you intend to test. The command will execute, and it will give you a result, but your interpretation of such will be incorrect. What’s happening currently is that you are quantifying e.g. “rate of change of FDC as a function of sex dummy variable” and “rate of change of FDC as a function of age”, summing them (which makes no sense), and then testing whether that sum is greater than zero.
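Concretely (a sketch, writing $\beta_{\mathrm{GI}}$, $\beta_{\mathrm{sex}}$, $\beta_{\mathrm{age}}$ for the three beta coefficients in your column order), the contrast [0 1 1] tests:

$$
H_0:\; \mathbf{c}^{\top}\boldsymbol{\beta} \;=\; 0\cdot\beta_{\mathrm{GI}} + 1\cdot\beta_{\mathrm{sex}} + 1\cdot\beta_{\mathrm{age}} \;\le\; 0
\qquad\text{vs.}\qquad
H_1:\; \beta_{\mathrm{sex}} + \beta_{\mathrm{age}} > 0
$$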

I can’t tell for certain from your text what the intended hypothesis test is. Most likely, you want to know either:

  • Whether either age or sex has an influence on e.g. FDC in either direction, which requires 4 t-tests (each column tested individually in both directions);

  • Whether it is the case that neither age nor sex has any effect on e.g. FDC, which sounds like an F-test.
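To make those options concrete (a sketch, assuming your column order of [intercept, sex, age]), the four one-sided t-test contrasts in the first option would be:

    0  1  0    (effect of sex in the positive direction)
    0 -1  0    (effect of sex in the negative direction)
    0  0  1    (FDC increases with age)
    0  0 -1    (FDC decreases with age)

whereas the second option would instead combine the rows [0 1 0] and [0 0 1] into a single F-test of the joint null hypothesis.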

Rob

Thank you for your answer; it is very useful. I changed my design_matrix, and the warning no longer appears.


My English is not very good. Maybe I can express it another way.

I intend to test three questions.
First: I intend to identify significant associations of the different FBA metrics (FD, FC, and FDC) with age, with sex as a covariate. My contrast_matrix is 0 0 1. Am I right?

Second: I intend to identify significant associations of the different FBA metrics (FD, FC, and FDC) with sex, with age as a covariate. My contrast_matrix is 0 1 0. Am I right?

Third: I intend to identify significant associations of the different FBA metrics (FD, FC, and FDC) with age and sex. My original contrast_matrix was 0 1 1, which I now know is wrong according to what you said. What should I do?
I don’t know if I have made it clear this time. Thank you very much.

First: I intend to identify significant associations of the different FBA metrics (FD, FC, and FDC) with age, with sex as a covariate. My contrast_matrix is 0 0 1. Am I right?

That test will look for any elements where the rate of change of the image metric under investigation (e.g. FDC) as a function of age is positive; that is, FDC increases with age. If you are additionally interested in elements where the rate of change of FDC as a function of age is negative—that is, FDC decreases with age—then most likely you would want to perform two tests: [0 0 1] and [0 0 -1].

(There is a theoretical alternative, where you instead perform an F-test on just the age variable, but it may unnecessarily complicate the issue, and it’s not what most people are looking for)

Second: I intend to identify significant associations of the different FBA metrics (FD, FC, and FDC) with sex, with age as a covariate. My contrast_matrix is 0 1 0. Am I right?

Correct, except for the issue regarding sign as per the first response above.

Third: I intend to identify significant associations of the different FBA metrics (FD, FC, and FDC) with age and sex.

I unfortunately can’t give a clear answer as to what data to provide to the command based on your current explanation. Significant associations “with age and sex” is too ambiguous. There are two likely possibilities:

  1. Looking for an interaction effect between age and sex; that is, the relationship between e.g. FDC and age is different between the two sexes.

  2. Performing an omnibus test, where one looks for any elements where the influence of either age or sex is non-zero. However this approach would typically not be done in parallel to the t-tests in your first two points; indeed in a way the purpose of an omnibus test is specifically to not also do those tests.

Hopefully that provides a basis for you to explain whether you want one of these or something different.

Cheers
Rob


Thank you for your help. I successfully finished the first two questions.
As for the third question, I meant the first case (an interaction effect between age and sex).


Then what should I do? Thank you very much.

One way to test for an interaction effect is described in the FSL GLM wiki page here. That specifically quantifies the rate of change of e.g. FDC as a function of age for males, quantifies the rate of change of FDC as a function of age for females, and then tests the hypothesis that one is larger than the other.
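For reference, that approach lays the design out along the following lines (a sketch of the general idea rather than a copy of the wiki’s example; column names and values here are purely illustrative, with ages demeaned):

    Male  Female  Age_if_male  Age_if_female
       1       0          3.2              0
       0       1            0           -1.5
       1       0         -0.8              0
     ...

    contrast: 0 0 1 -1   (age slope for males greater than for females)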

An alternative way it can be done is to introduce an explicit interaction column into the design matrix, as a product of the two factors; e.g.:

GI   Sex     Age   Sex_x_age
----------------------------
 1    -1  0.3361     -0.3361
 1    -1  0.5270     -0.5270
 1    -1  1.2907     -1.2907
 1    -1  0.3361     -0.3361
 1    -1  0.6225     -0.6225
 1    -1  0.9088     -0.9088
 1     1  0.0497      0.0497
 1    -1  0.5270     -0.5270
 1     1  0.5270      0.5270
 1     1  0.3361      0.3361
 1     1  0.8134      0.8134
 1     1  0.3361      0.3361
 1    -1  1.7679     -1.7679
 1    -1  1.1952     -1.1952
 1     1  0.5270      0.5270
etc.

If the null hypothesis is true and there is no interaction between sex and age (i.e. they can be treated as independent variables), then the fourth beta coefficient should be zero. This, I believe, is mathematically equivalent to the example presented in the FSL GLM wiki, just with a transformation of variables that is more amenable to increasingly complex models (but please, someone correct me if I’m wrong on this!).
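As a minimal sketch of how that fourth column is built (file names are placeholders; sex is assumed to be coded as ±1 and age to be already demeaned, as discussed above):

    import numpy as np

    # Hypothetical per-subject inputs: sex coded as -1 / +1, age demeaned
    sex = np.loadtxt("sex.txt")
    age = np.loadtxt("age_demeaned.txt")

    design = np.column_stack([
        np.ones_like(sex),   # global intercept (GI)
        sex,                 # Sex
        age,                 # Age (demeaned)
        sex * age,           # Sex_x_Age interaction: element-wise product
    ])
    np.savetxt("design_matrix.txt", design, fmt="%.4f")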

Rob

But sometimes I may get some data like:
GI Sex Age Sex_x_age
1 -1 -0.3361 0.3361
1 1 0.3361 0.3361
Is that ok?

And my contrast_matrix should be: 0 0 0 1?

Thank you for your help.
Now, my results: the beta3.mif files for FD/FC/FDC are as shown in the figure. These values are close to 0.
Does that mean there is no interaction between sex and age?


And my contrast_matrix is 0 0 0 1. My design_matrix is: GI Sex Age Sex_x_Age.

But sometimes I may get some data like:

GI Sex Age Sex_x_age
1 -1 -0.3361 0.3361
1 1 0.3361 0.3361

Is that ok?

If you had only those two input data, then obviously it would be a problem; it’s an under-determined system. But with more data it’s fine. It’s something that would benefit from a good visualisation; I might have to consider producing something for the upcoming workshop…

One way you can think about that term is as follows. You fit a linear relationship between e.g. FDC and sex, and fit a linear relationship between FDC and age. You then say, “OK; if I were to fit an additional relationship, which is a positive correlation between FDC and age for males but negative correlation between FDC and age for females, what would be the magnitude of that relationship? Is it different to zero?”. That, I am positing, is equivalent to saying “here’s the relationship between FDC and age for males; here’s the relationship between FDC and age for females; is the magnitude of these two relationships equivalent?”.

And my contrast_matrix should be: 0 0 0 1?

As a t-test, this would specifically test the hypothesis of whether the rate of change of e.g. FDC as a function of age is greater for the sex designated as 1 than it is for the sex designated as -1. If you are interested in testing this hypothesis in both directions, you would need an additional 0 0 0 -1 t-test; if you are only interested in whether or not this relationship is non-zero, you could alternatively feed that row into an F-test.

These values are close to 0. Does that mean there is no interaction between sex and age?

It’s hard to tell based on beta coefficients alone, since it depends on the magnitude relative to the errors in the system. The interpretation of those raw beta coefficients is possible, but I’m concerned that my trying to explain such will just cause more confusion rather than less…
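For intuition (sketching the standard GLM t-statistic rather than anything MRtrix3-specific), the quantity actually tested is the contrasted beta divided by its estimated standard error:

$$
t \;=\; \frac{\mathbf{c}^{\top}\hat{\boldsymbol{\beta}}}{\sqrt{\hat{\sigma}^{2}\,\mathbf{c}^{\top}(\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{c}}}
$$

so a beta that looks small can still be significant if the residual variance is small, and a large beta can fail to reach significance if it is not.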

Cheers
Rob

Thank you for your help. This has been a great boost to my research.
Now I have another question. This is the result I obtained from the FBA. These figures show the negative relationship between FD and age (50-90). The first figure shows the uncorrected p-value (p > 0.95), and the second figure shows the FWE p-value (p > 0.95). Most fixel p-values are not significant after correction. Does this make sense?

[Figure: uncorrected p-value]

[Figure: FWE p-value]

Besides, I performed an analysis of the FBA metrics of the major fibre tracts. Take the anterior commissure (CA) tract for example: it is significant by uncorrected p-value, but it is not significant by FWE p-value. According to the result of the linear regression, should I choose the uncorrected p-value to improve my research?

[Figure: anterior commissure (CA) tract]

[Figure: uncorrected p-value]

[Figure: FWE p-value]

[Figure: linear regression of FD vs age (p = 0.014202 < 0.05)]


The first figure shows the uncorrected p-value (p > 0.95), and the second figure shows the FWE p-value (p > 0.95).

  1. The screenshots can be potentially misleading due to your utilisation of the sidebar. In the first image, which shows uncorrected p-values, the first entry in the fixel directory listing is the one being displayed, but it is the second entry in that list that is currently selected and therefore has its display properties reflected in the GUI elements. Selecting the same fixel directory as that currently being displayed would have avoided some initial confusion on my part…

  2. Since you are thresholding the image (1.0 - p-value) above 0.95, this is equivalent to thresholding p-value below 0.05.

  3. In the first figure, anything with an uncorrected p-value of 0.05 or less will be black. But all fixels (that participated in statistical inference) are in fact “shown”.

  4. In the second image, you are in fact (seemingly) applying a threshold of 0.95 to the fixel-wise Z-statistics. This I can only assume is an error in interaction with the interface, since it’s an unusual condition to apply. The input fixel data files for determining visibility of fixels, vs. the colour of each visible fixel, are two separate controls.

Should I choose the uncorrected p-value to improve my research?

If only it were that easy… If using uncorrected p-values constituted “an improvement in research”, nobody would perform FWE correction. Statistical inference is not supposed to be an exercise in manual maximisation of reportable results.

What you can say is that nothing achieved statistical significance at the fixel level at FWE p<0.05, but that post hoc interrogation of the raw data suggests that there may be mild effects in specific bundles, and more targeted hypothesis testing (e.g. restricting analysis to specific tracts, or performing tract-wise rather than fixel-wise inference) would potentially have identified significant associations there. What you can’t do is take the data that you manually interrogated in order to formulate those targeted hypotheses, and use those same data to formally test those hypotheses; that would be so-called “double-dipping”.

So.


Do I understand what you mean? Although nothing achieved statistical significance at the fixel level at FWE p<0.05, that does not mean there is no significant relationship at the tract level. Although the two conclusions are different, they are not contradictory.

@yichao Quoting prior text may be easier & faster than screenshots; see relevant thread.

Regarding significance, this is a pure double-dipping / p-hacking problem, and is not anything specific to MRtrix3.

My presumption is that you commenced your experiment with the hypothesis that one or more fixel-wise metrics have a non-zero association with age and/or sex, and a whole-brain FBA with FWE correction and an alpha of 0.05 was your analysis technique to test that hypothesis. You perform that analysis, and report whatever was or was not significant. Any interrogation of data over and above that is precisely that: a post hoc interrogation of the data. Any particular post hoc numerical analysis may report p<0.05, but the problem is that the test that achieved significance does not align with your original hypothesis.

If, at the commencement of your experiment, you had stated that your hypothesis was of a non-zero relationship between specifically FD and specifically age specifically in the anterior commissure, then sure, you could report that result as statistically significant. But I’m presuming from your description that that was not the case: you used the non-significant fixel-wise FBA results to construct that hypothesis, and then tested that hypothesis using the same data. The problem here is that for basically any experiment, you can go digging through your data, fine-tuning data and parameter selection in order to produce p<0.05, and then report that as a statistically significant result; but that would be fundamentally misleading.

If these data were not significant in a whole-brain analysis, but showed a suggestion of an association between FD and age in the anterior commissure, and you then collected and analysed a new dataset to test that hypothesis and those data achieved p<0.05, then that is something you could report as statistically significant.