Speeding up fixelcfestats?

Hi there,
I’m having some difficulties with fixelcfestats - I get stuck on this screen:


My processing computer is a few years old, but should still be fairly powerful. I followed the advice in the documentation and cut my voxel size in half to lessen the computing burden, but I’m having the same issue.
Any advice would be greatly appreciated!
Ben

I’ve tried reducing the number of streamlines to 800,000 with SIFT, and now I’ve even tried -nperms 10 just to see if I could get the process to complete, but it’s still getting stuck after outputting the beta coefficients, effect size, and standard deviation. I’m wondering if there’s something simple I’m missing here.

I’m not sure what’s going on here… But it looks like you’re stuck at this point in the code (the comment is a bit of a giveaway… :grimacing:). That’s a 2-year-old comment though, so I’m not sure whether it’s still current. No doubt @rsmith will have a better idea of where the problem might be… (?)

Thanks a lot for the reply! Yes I am still struggling with this issue, and any help with it would be much appreciated!

@rsmith do you have any feedback or advice about this?
Thanks!

Sorry for the delay; still playing catch-up…

No guarantee that the issue has anything to do with the comment linked by @jdtournier; that’s probably a vestige of development work at some point.

The main work done between the end of the “outputting beta coefficients…” line and the appearance of the “running permutations” line is the pre-computation of the default permutation. This involves fitting the full GLM (a different piece of code from that used to produce the basic properties already written out) and performing statistical enhancement. This section is also constrained to be single-threaded, which is why it can seem “slower” than the rate at which the command subsequently progresses through the permutation testing.
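As an aside, one way to check that the command is still grinding through this single-threaded phase (rather than being genuinely deadlocked) is to watch its CPU usage; for instance (assuming a single fixelcfestats process is running):

```
top -p $(pgrep fixelcfestats)
```

If it is still working through this stage, you should see it pinned at around 100% of one core, rather than ~100% × N cores as during the multi-threaded stages.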

In my own experience, hanging at this point has most frequently been due to extreme t-values. CFE works by integrating from (by default) 0.1 up to the maximum t-value in the image, in increments of 0.1. If a single fixel in your image has an erroneously large t-value, that numerical integration can take prohibitively long. By “erroneously large”, I don’t mean 10, I mean 10^30. (Yes, this can happen if mistakes are made…)
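To put rough numbers on that (back-of-envelope arithmetic, not a quote from the code): with an increment of 0.1, the enhancement loop performs on the order of t_max / 0.1 passes over the data. A sensible maximum t-value of 10 means ~100 passes; an erroneous maximum of 10^30 means ~10^31 passes, which will never finish on any hardware.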

I would suggest running mrstats on those fixel data files that have been successfully generated in your output fixel directory. If that doesn’t provide insight, we’ll need to look closer…
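For instance, something along these lines (a sketch only: I’m assuming your output fixel directory is called stats_fdc, so adjust the path to your own; the index.mif and directions.mif entries swept up by the glob can be ignored, as they are not fixel data files):

```
for f in stats_fdc/*.mif; do echo $f; mrstats $f; done
```

Any fixel data file reporting a max on the order of 10^30 (or nan / inf) would be the likely culprit.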

While running low on RAM can slow this code down as well, I would expect RAM issues to cause a hang earlier, at the “pre-computing fixel-fixel connectivity” step; so I don’t think reducing the number of fixels will help, unless doing so happens to exclude some “problematic” fixel from the analysis.


Thanks for your reply @rsmith!
Could the problem be that I was trying this on a 2-patient sample (actually 1 patient, pre and post surgery)? At the point where it got stuck, the fdc folder had been created and a tvalue.mif had been written, but mrstats just revealed:
```
volume  mean  median  stdev  min  max  count
[ 0 ]   0     0       0      0    0    225866
```

Okay, I’ve seen a couple of people run into this hurdle.

By trying to run permutation testing on a small sample dataset in order to speed up computation, what actually happens is that the command hangs when trying to generate the set of permutations, since it is mathematically impossible to obtain 5,000 unique permutations from 2 inputs.
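To see why (my arithmetic, not output from the command): n inputs admit at most n! distinct orderings, so 2 inputs yield only 2! = 2 unique permutations; even with a completely unrestricted design, you would need at least 7 inputs before 5,000 unique permutations exist at all (6! = 720, whereas 7! = 5,040).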

In the coming update, the command will instead inform the user that the target number of permutations could not be obtained, and the maximum possible number of unique permutations will be used instead (which would be 2 in this case). Such a warning would however only be the messenger of the fact that one cannot generate a non-parametric null distribution from such a small number of inputs. You can certainly run fixelcfestats with the -notest option in order to generate output images from the default permutation, but a meaningful null distribution can’t be generated.
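For example (a sketch only: the positional arguments below are placeholders for your own files, and the exact interface may differ between versions, so check fixelcfestats --help):

```
fixelcfestats -notest fdc files.txt design.txt contrast.txt tracks.tck stats_fdc
```

This skips the permutation testing entirely and only writes out the per-fixel GLM outputs (beta coefficients, effect size, t-values, and so on).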