According to the FBA multi-tissue CSD pipeline documentation, “for about 500,000 fixels in the template analysis fixel mask and a typical tractogram defining the pairwise connectivity between fixels, 128GB of RAM is a typical memory requirement.”
Now, I checked my fixel mask, and I have 682,874 fixels, yet my computer has only 32GB of RAM. My question is: is it practically impossible to perform the fixel analysis in my scenario, or will it just be sluggish and take a long time?
I’ve also noticed that the further the analysis progresses, the slower it gets. I’m assuming that’s because the RAM is holding on to whatever has been calculated so far and keeps accumulating more? If so, does that mean there’s a higher chance of crashing the further along it is?
I understand there are ways to reduce RAM requirements, but I was just wondering if it’s possible to go through with the analysis without resorting to that.
Sorry for the basic question – I’m not very computer savvy.
Yes and no. I assume you mean your system is equipped with 32GB of physical RAM. Most OSes will also allow the use of swap space as additional virtual memory, which means applications technically have access to a lot more memory than the computer is physically equipped with. You can query this with the `free -h` command on Linux.
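In case it helps, this is what that check looks like in practice (the exact figures shown will obviously depend on your machine):

```shell
# Show total, used, and free physical memory plus swap, in human-readable units.
free -h
# The "Mem:" row is your physical RAM; the "Swap:" row is the virtual
# memory the OS can spill into once physical RAM is exhausted.
```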
So in theory, you could set up your system with an additional 128GB of swap space, and the command should be able to run. However, you really don’t want to do this, since access to spinning disks is many orders of magnitude slower than access to RAM. If your analysis starts relying on the swap, you’ll find your system starts thrashing, and becomes essentially unusable. While the command will eventually complete, you’ll probably be in a retirement home by that point… if the computer hasn’t had a hardware failure before then. Personally, whenever I see my physical RAM usage get close to saturation, I’ll rush to interrupt the process (with Ctrl-C) before it starts thrashing, otherwise it can take quite a while for the system to recover…
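For completeness (and purely as an illustration – as said above, I wouldn’t actually recommend relying on this), adding a swap file on Linux typically looks something like this; the path `/swapfile` and the size are arbitrary choices here, and all of these steps require root:

```shell
# Reserve space for a swap file (use dd if your filesystem doesn't support fallocate):
sudo fallocate -l 128G /swapfile
# Restrict permissions (swapon refuses world-readable swap files):
sudo chmod 600 /swapfile
# Format it as swap and enable it immediately:
sudo mkswap /swapfile
sudo swapon /swapfile
# Verify the extra space now shows up under the "Swap:" row:
free -h
```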
That might be due to the system starting to use swap space – see above.
In general, I don’t think there’s any way around making sure the whole analysis can be performed purely within physical RAM… We might be able to get around this at some point, but it would require a complete overhaul of the command – assuming it can be done at all. Given the relatively low cost of RAM these days, I doubt this is something we’ll be investigating any time soon…