FBA analysis pipeline help

Hi Rob,

I was trying to visualize significant streamlines (p < 0.01) by color coding them with the percentage effect, but I don’t see the changes as expected…

I do visualize it when I use the abs effect for color coding instead. Is there anything I’m missing out on :face_with_raised_eyebrow:? (using MRtrix 3.0_RC2)

This information is difficult for me to interpret, since I don’t know how the visualisation differs from what you expect, and the images have been too heavily compressed so I can’t use the contents of the GUI to assess if there’s anything fundamentally wrong.

Ah! My bad…I hope it’s better this time :stuck_out_tongue_closed_eyes: Basically, I want to visualize significant fixels in FDC (pic 1) as streamlines color coded by percentage effect (pic 2) using the .tsf files for percentage effect & p-value. I would expect to see a percentage decrease in the tracts (which I can’t visualize for some reason!). So, I was just wondering if I’m loading the images the right way :thinking:


Hello all,

I am performing FBA (Fibre density and cross-section - Single-tissue CSD) using the instructions at: https://mrtrix.readthedocs.io/en/latest/fixel_based_analysis/st_fibre_density_cross-section.html

I have 4 subjects: 2 controls and 2 patients.

I want to perform the statistical analysis of FD, FC, and FDC as in step 22; my command is:
fixelcfestats fd files.txt design_matrix.txt contrast_matrix.txt tracks_100_thousand_sift.tck stats_fd

but I get into trouble as follows:


The statistical analysis of FD does not seem to complete (I’ve been waiting for more than two days :disappointed:).
Do you have any ideas of what could be wrong?

Regards,
Pinyi Wang

Hi @PinyiWang,

fixelcfestats can be pretty taxing on your system in terms of CPU usage, but more importantly in terms of memory requirements. As it’s still running at this stage, how is your memory usage (and swap space usage) going on your system?

Also, since the memory requirements are particularly high for this, definitely make sure that anything else requiring a lot of memory is shut down. That would e.g. include browsers, and programs that load a lot of data (e.g. Matlab with a large workspace), but really anything that’s using a lot of memory in general.

Cheers,
Thijs

Hi @ThijsDhollander,

Thanks for your response.

My system is Ubuntu 16.04 running in VMware Workstation, with 24GB of memory, 8 CPUs, and 300GB of hard disk space.

Maybe that is not enough for fixelcfestats…

Regards,
Pinyi Wang

Yes, I’m not 100% sure, but I wouldn’t be surprised if you ran out of memory and the process started swapping. It won’t crash immediately, but it may essentially start to take forever: either eventually finishing (after a few years; not kidding), or still crashing once all the swap space is gone as well.

The only way to be really sure that this is the issue would be to check the memory/swap state of your system after having it run for at least a good day (24 hours) or so. By then, it should become clear whether it’s hitting the limits of your RAM or not.
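For example (standard Linux tools, nothing MRtrix-specific), you can keep an eye on RAM and swap usage while fixelcfestats runs with:

watch -n 60 free -h

Steadily growing swap usage while physical memory sits at its ceiling would confirm the diagnosis.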

OK, I will check the memory of my system after having it run for 24 hours.

Perhaps I should buy a new system…

Thanks again for your reply.

Regards,
Pinyi Wang

That’s probably going to end up being the solution indeed. For a typical fixel-based analysis, where you would e.g. have up to 500,000 fixels, you need 128GB of RAM to be able to run such a thing. The memory requirements scale roughly quadratically as a function of the number of fixels (and as we are discovering ourselves, live, over here, a number of other things further influence this). Even with 64GB of RAM, it may be hard. Rather than potentially investing a lot of money in hardware, it may be worthwhile exploring the availability of a cluster at your institute, or at institutes you know in your community; unless you’re expecting to do this kind of analysis very routinely and want to explore a lot of different designs, etc…
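To get a rough feel for that quadratic scaling (purely illustrative arithmetic, assuming dense 4-byte storage; the actual fixel-fixel connectivity matrix is stored sparsely, which is why 128GB can suffice):

500,000 fixels → 500,000² = 2.5×10¹¹ fixel-fixel pairs
2.5×10¹¹ pairs × 4 bytes ≈ 10¹² bytes ≈ 1TB if stored densely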

Yes, our computer cluster platform is under construction right now… Hope it can be finished as soon as possible…
Anyway, thanks for your advice!!!

Regards,
Pinyi Wang


No worries; you’re welcome! :wink:

I would expect to see a percentage decrease in the tracts (which I can’t visualize for some reason!).

There are a couple of things that would be worth looking at here:

  • In the lower image, you are setting the colour bar range from 0 to 40. However, the relevant documentation page provides two different equations: the first is relevant for FDC and includes multiplication by 100 to get a genuine “percentage”, whereas the second is relevant only for FC and does not include that multiplication step. Make sure you have used the first equation (see the sketch after this list).

  • The example makes certain assumptions about the structure of your design matrix and the arrangement of factors within it, since it directly reads from image beta1.mif. Therefore you need to cross-check the actual mathematical operation this example is performing with your actual experiment and ensure that it is an appropriate expression.

  • You should also check the sign of the effect and any derived calculations thereof.
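As a concrete sketch of that first equation (illustrative file and directory names only; beta0.mif here stands in for whichever coefficient image encodes the reference mean in your design, as per the second point above):

mrcalc stats_fdc/abs_effect.mif stats_fdc/beta0.mif -divide 100 -mult stats_fdc/percentage_effect.mif

That is: the absolute effect, divided by the reference mean, multiplied by 100.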

but i get into trouble as follows … The step of statistical analysis of FD does not seem to have been completed (I’ve been waiting more than two days) … Do you have any ideas of what could be wrong?

Throwing an alternative possibility into the mix based on my own experience:

Generally if the system runs out of memory due to the size of the fixel-fixel connectivity matrix, this will occur while the terminal still displays:

fixelcfestats: [ xx%] pre-computing fixel-fixel connectivity

In your case, this step has completed, and beta coefficient / effect size / standard deviation images appear to have been generated (you should be able to see these in your output directory). Where your program has stalled is in calculation of the t-values and CFE enhanced statistics for the default permutation, before generation of the null distribution commences.

In my experience, this normally happens when the design matrix becomes sufficiently ill-conditioned such that the t-values in some fixels become erroneously large (e.g. 10^17), and CFE then effectively stalls as it integrates from 0 to 10^17 in 0.1 increments.

Normally in such a circumstance I would advise calculating the condition number of your design matrix. But if you are genuinely seeing this behaviour in a toy example with 4 subjects and 2 factors only, what’s more likely happening is that somewhere within your template are fixels that contain null values for some / all subjects, in such a way that the GLM code misbehaves and generates enormous / non-finite t-values, subsequently making CFE take forever. What you can do is report the output of mrstats on those fixel data files that fixelcfestats has successfully generated.
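For example (a minimal sketch; substitute your actual stats output directory name for stats_fd):

for f in stats_fd/*.mif; do echo ${f}; mrstats ${f}; done

Any fixel data file whose mean / std. dev. / min / max come back as enormous, zero, or non-finite would point straight at the problem.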

Rob


Hi,

Besides memory problems, there is something weird with your design and contrast matrices: it seems the hypothesis you are testing is that the mean value in the first group (the first beta) is zero, rather than a comparison of group means.
I would be tempted to think that this can cause zeros in the standard deviation of the residuals, and, as @rsmith says, the statistics code would then misbehave with the division by zero when calculating the t-values and standardised effects.
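For reference, one standard way to encode a plain two-group comparison (just a sketch; it assumes your subject rows list the two controls first and then the two patients):

design_matrix.txt:
1 0
1 0
0 1
0 1

contrast_matrix.txt (testing controls > patients):
1 -1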


OK! I will redesign my matrix and try again!

Thanks for your help!

Regards,
Pinyi Wang

Maybe my sample size is too small; I will redesign my matrix and try again!

Thanks for your reply.

Regards,
Pinyi Wang

After fiddling with the GLM for some time :smile:, I have decided to take a more conservative approach :sunglasses:

So, here it goes: how would I obtain the percentage effect for this setup?

                   GI   Group   Site of scan
Control, site 1 |  +1  |  +1  |     -1      |
Patient, site 1 |  +1  |  -1  |     -1      |
Control, site 2 |  +1  |  +1  |     +1      |
Patient, site 2 |  +1  |  -1  |     +1      |
                 ----------------------------
                   b0     b1        b2

Contrast matrix
0 +1 0

Regards,
karthik

(Note: Edited b1,b2,b3 -> b0,b1,b2 since that’s how the output files are named)

b1 encodes half the difference between controls and patients: the model predicts b0+b1 for a control and b0-b1 for a patient (averaging over sites), so their difference is 2xb1. Doubling this image therefore gives you the difference between controls and patients.

b0 encodes, essentially, the value of your quantitative metric for a hypothetical subject “in group zero”, “scanned at site zero” (this is the definition of the “global intercept”).

So 2xb1/b0, multiplied by 100, would give you the percentage effect relative to the population mean.

If you really want the percentage effect to be expressed relative to the control group, then you need to be able to calculate the mean of the control group. This is the GLM’s predicted value for a hypothetical subject for which their entry in the design matrix would be: +1 +1 0; that is, a member of “group +1” (control group), averaged over sites. So this would be b0+b1, and so the percentage effect relative to controls would be: 2xb1/(b0+b1), again multiplied by 100.
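In mrcalc terms, that would be something like (a sketch only, assuming the coefficient images are named beta0.mif / beta1.mif as per your note above; output names are illustrative):

mrcalc beta1.mif 2 -mult beta0.mif -divide 100 -mult percentage_effect_vs_mean.mif
mrcalc beta1.mif 2 -mult beta0.mif beta1.mif -add -divide 100 -mult percentage_effect_vs_controls.mif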

Thanks a lot Rob!

That really helps :sunglasses::wink:

A post was split to a new topic: Longitudinal FBA: small absolute / large standard effect