FBA analysis pipeline help


#41

but i get into trouble as follows … The step of statistical analysis of FD does not seem to have been completed (I’ve been waiting more than two days) … Do you have any ideas of what could be wrong?

Throwing an alternative possibility into the mix based on my own experience:

Generally if the system runs out of memory due to the size of the fixel-fixel connectivity matrix, this will occur while the terminal still displays:

fixelcfestats: [ xx%] pre-computing fixel-fixel connectivity

In your case, this step has completed, and beta coefficient / effect size / standard deviation images appear to have been generated (you should be able to see these in your output directory). Where your program has stalled is in calculation of the t-values and CFE enhanced statistics for the default permutation, before generation of the null distribution commences.

In my experience, this normally happens when the design matrix becomes sufficiently ill-conditioned such that the t-values in some fixels become erroneously large (e.g. 10^17), and CFE then effectively stalls as it integrates from 0 to 10^17 in 0.1 increments.

Normally in such a circumstance I would advise calculating the condition number of your design matrix. But if you are genuinely seeing this behaviour in a toy example with 4 subjects and 2 factors only, what’s more likely happening is that somewhere within your template are fixels that contain null values for some / all subjects, in such a way that the GLM code misbehaves and generates enormous / non-finite t-values, subsequently making CFE take forever. What you can do is report the output of mrstats on those fixel data files that fixelcfestats has successfully generated.
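The condition-number check mentioned above is straightforward with NumPy. This is a hypothetical sketch (the 4-subject, 2-factor design below is invented purely for illustration; in practice you would load the design matrix file you passed to fixelcfestats):

```python
import numpy as np

# Hypothetical 4-subject design: global intercept + group (+1 / -1).
design = np.array([[1.0,  1.0],
                   [1.0,  1.0],
                   [1.0, -1.0],
                   [1.0, -1.0]])

# 2-norm condition number: ratio of largest to smallest singular value.
# A well-behaved design has a small condition number; values approaching
# the reciprocal of machine precision (~1e16) indicate near-singularity.
print(np.linalg.cond(design))   # ~1.0 for this balanced, orthogonal design

# Near-collinear columns blow the condition number up:
bad = np.array([[1.0, 1.0 + 1e-12],
                [1.0, 1.0],
                [1.0, 1.0],
                [1.0, 1.0 - 1e-12]])
print(np.linalg.cond(bad))      # enormous
```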

Rob


#42

Hi,

Besides memory problems, there is something weird with your design and contrast matrices: it seems the hypothesis you are testing is that the mean value in the first group (the first beta) is zero, rather than a comparison of group means.
I would be tempted to think that this can cause zeros in the standard deviation of the residuals, and as @rsmith says, the statistics code would then misbehave with the division by zero when calculating the t-values and standardized effects.
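To see how a zero residual standard deviation blows up the t-values, here is a toy NumPy illustration (this is not the fixelcfestats code; the design, data, and contrast below are made up):

```python
import numpy as np

# If a fixel's values are fitted exactly by the model, the residual
# variance is zero and the t-value computation divides by zero.
X = np.array([[1.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0],
              [1.0, -1.0]])          # intercept + group, 4 subjects
y = X @ np.array([0.5, 0.1])         # noise-free data: a perfect fit

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
dof = X.shape[0] - X.shape[1]
sigma2 = resid @ resid / dof         # residual variance: (near) zero

c = np.array([1.0, 0.0])             # contrast testing beta0 = 0
with np.errstate(divide="ignore", invalid="ignore"):
    se = np.sqrt(sigma2 * (c @ np.linalg.inv(X.T @ X) @ c))
    t = (c @ beta) / se              # enormous or infinite t-value
print(t)
```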


#43

OK! I will redesign my matrix and try again!

Thanks for your help!

Regards,
Pinyi Wang


#44

Maybe my sample size is too small; I will redesign my matrix and try again!

Thanks for your reply.

Regards,
Pinyi Wang


#45

After fiddling with the GLM for some time :smile:, I have decided to take a more conservative approach :sunglasses:

So, here it goes: how would I obtain the percentage effect for this setup?

                   GI    Group   Site of scan
Control, site 1 |  +1  |  +1   |     -1     |
Patient, site 1 |  +1  |  -1   |     -1     |
Control, site 2 |  +1  |  +1   |     +1     |
Patient, site 2 |  +1  |  -1   |     +1     |
                 ----------------------------
                   b0     b1        b2

Contrast matrix
0 +1 0

Regards,
karthik


#46

(Note: Edited b1,b2,b3 -> b0,b1,b2 since that’s how the output files are named)

b1 encodes half the difference between controls and patients (a control is predicted as b0+b1, a patient as b0-b1). So doubling this image gives you the difference between controls and patients.

b0 encodes, essentially, the value of your quantitative metric for a hypothetical subject “in group zero”, “scanned at site zero” (this is the definition of the “global intercept”).

So 2xb1/b0 would give you the percentage effect relative to the population mean.

If you really want the percentage effect to be expressed relative to the control group, then you need to be able to calculate the mean of the control group. This is the GLM’s predicted value for a hypothetical subject for which their entry in the design matrix would be: +1 +1 0; that is, a member of “group +1” (control group), but site of scan unknown. So this would be b0+b1, and so the percentage effect relative to controls would be: 2xb1/(b0+b1).
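This arithmetic is easy to sanity-check numerically. A small NumPy sketch with made-up group means (this is not part of any MRtrix command, just a verification of the algebra):

```python
import numpy as np

# Design matrix from the question: global intercept, group (+1 control /
# -1 patient), site of scan (-1 site 1 / +1 site 2).
X = np.array([[1.0,  1.0, -1.0],   # control, site 1
              [1.0, -1.0, -1.0],   # patient, site 1
              [1.0,  1.0,  1.0],   # control, site 2
              [1.0, -1.0,  1.0]])  # patient, site 2

# Made-up group means, identical at both sites for simplicity.
control_mean, patient_mean = 0.55, 0.50
y = np.array([control_mean, patient_mean, control_mean, patient_mean])

b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

print(2 * b1)               # ~0.05: the control-minus-patient difference
print(b0 + b1)              # ~0.55: the control-group mean
print(2 * b1 / (b0 + b1))   # the effect as a fraction of the control mean
```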


#47

Thanks a lot Rob!

That really helps :sunglasses::wink: