I am running step 15 of the fixel pipeline, and tcksift finishes creating a homogeneous mask, segments the FOD, maps tracks to the image, and then basically crashes my system. tcksift still appears to be running, but only at ~5% CPU; nothing else on my computer is running and it’s virtually unusable. My analysis still appears to be running a day later, but it seems like this process might take upwards of a week to complete. Is this normal?
Thanks in advance for the help,
Sounds like you’ve run out of RAM, and the system is spending all of its time swapping RAM to disk. SIFT is very memory hungry; I suggest you monitor the RAM usage during execution, and if it’s too much, you may have to run it on a more powerful system, or figure out a way to reduce RAM usage (although the latter wouldn’t be my recommendation, as it will involve some compromise on the quality of the analysis)…
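If it helps, here is a minimal way to log memory usage over time on Linux while tcksift runs (it just reads /proc/meminfo; the log file name is an arbitrary example):

```shell
# Sketch: log available system memory every 10 seconds while tcksift
# runs in another terminal (Linux only; reads /proc/meminfo).
# Stop with Ctrl+C once tcksift finishes.
while sleep 10; do
  printf '%s  ' "$(date +%T)"
  awk '/MemAvailable/ {printf "%.1f GB available\n", $2 / 1048576}' /proc/meminfo
done | tee ram_log.txt
```

If the available figure drops to near zero while tcksift is in the ‘mapping tracks to image’ stage, swapping is almost certainly what you’re seeing.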
Yes, swapping is the most likely culprit here. Normally tcksift will fail if you run out of memory (this usually happens during the ‘mapping tracks to image’ stage), but it might be swapping in your case. You can try either reducing the number of streamlines, or providing a down-sampled FOD image to tcksift as suggested here - the latter is particularly relevant if you’re operating on up-sampled FOD data for the purpose of fixel-based analysis.
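For concreteness, the two options might look something like this (the image and track file names are placeholders; mrgrid’s regrid operation and tcksift’s -term_number option are standard MRtrix3 usage, but check against the version you have installed):

```shell
# Option 1: down-sample the (up-sampled) FOD image before SIFT,
# since SIFT's RAM usage scales with the number of voxels traversed.
mrgrid wmfod_upsampled.mif regrid -voxel 2.5 wmfod_sift.mif

# Option 2: terminate filtering at a smaller target streamline count.
tcksift tracks_20M.tck wmfod_sift.mif tracks_sift.tck -term_number 2M
```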
It’s also worth noting that if you’re running into a RAM issue at this point in the pipeline, you’re almost certainly going to encounter a RAM issue with the fixelcfestats command as well, as its memory demands (for the default parameters / pipeline) are greater still. So you may also want to look into what computing resources you have, or can make available.
Thank you all for the help. I ended up just letting it run, and although it took about a day and a half, it still appears to be working. I did get a warning message, however. As far as the stats go, what computing specs should I be aiming for? My computer has 16GB of RAM, so I can search for a system with either 32 or 64.
That warning does pop up every now and then. Basically, if you try to filter an excessively large number of tracks down to an excessively small number of tracks, eventually each track you remove provides an insignificant improvement to the model, and the discrete nature of a streamlines reconstruction (i.e. individual trajectories rather than a ‘field’ of connectivity) begins to have an effect on the algorithm’s performance - this is what I refer to in the SIFT paper as the ‘quantisation limit’ (precise details in the manuscript). All that warning message means is that you’ve passed that limit, but because you specifically asked the command to reach a certain number of streamlines before terminating, it is proceeding anyway.
The amount of RAM required for SIFT depends on the input number of tracks and the image resolution (and also a little on the streamline seeding mechanism used). There’s a table in the SIFT2 paper showing execution times and RAM requirements for a few different use cases. At 2.5mm isotropic, the requirement is about 200-300MB per million streamlines.
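As a back-of-envelope sketch of that figure (the 200-300MB per million streamlines applies to 2.5mm isotropic data; the streamline count below is just an example):

```shell
# Rough RAM estimate for tcksift at ~2.5 mm isotropic resolution,
# using the 200-300 MB per million streamlines figure above.
N_MILLION=50   # e.g. filtering a 50 million streamline tractogram
echo "Estimated RAM: $((N_MILLION * 200))-$((N_MILLION * 300)) MB"
# i.e. roughly 10-15 GB for this example
```

On a finer (e.g. up-sampled) grid the per-million figure will be higher, which is exactly why down-sampling the FOD image for the SIFT step helps.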
fixelcfestats, running on an up-sampled (1.25mm) template, requires the better part of a 128GB machine.