Yes, with the possible exception of tcksift
if running with more than ~10 million streamlines – it’s quite memory-intensive…
The other memory-hungry application is fixelcfestats
(the statistics for fixel-based analysis), which can require 64-128GB…
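To get a feel for why tractograms of that order stress RAM, here's a rough back-of-envelope. All figures below (points per streamline, float size) are illustrative assumptions for the sake of the arithmetic, not MRtrix3 internals:

```shell
# Rough vertex storage for a 10-million-streamline tractogram.
# Assumptions: ~100 points per streamline, 3 coordinates per point,
# 4 bytes per float32 value.
streamlines=10000000
points_per_streamline=100
bytes=$(( streamlines * points_per_streamline * 3 * 4 ))
echo "approx $(( bytes / 1000000000 )) GB for raw vertices alone"
```

That's on the order of 12GB before counting any of the per-streamline bookkeeping that SIFT needs on top, so the numbers add up quickly.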
I’d say try it without, and upgrade if you need to. However, if you know you intend to use tcksift
/ tcksift2
(which I’d recommend), I’d install the extra RAM now.
Yes, within the same limitations as above, and depending on your input data. For instance, processing HCP data would be problematic on 8GB, since the raw DWI extracts to 4GB once uncompressed – so even converting the data from NIfTI to .mif format might be an issue. But for more standard datasets, 8GB is typically ample, until you need to run tcksift
…
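The 4GB figure can be sanity-checked with some quick shell arithmetic; the matrix size and volume count below are assumed typical HCP diffusion dimensions, used here purely for illustration:

```shell
# Uncompressed size of an HCP-style diffusion dataset.
# Assumptions: 145 x 174 x 145 voxel matrix, 288 volumes,
# 4 bytes per value once stored as 32-bit float.
bytes=$(( 145 * 174 * 145 * 288 * 4 ))
echo "approx $(( bytes / 1000000000 )) GB uncompressed"
```

So a single copy of the uncompressed data already consumes about half of an 8GB system's RAM, before any processing even starts.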
Note also that display within a VM is very problematic / near-impossible. See this recent thread on the topic (it’s running on Docker, but the same issues apply).
It depends what kind of server… If you have access to a large memory system, then you could just run those parts of your analysis on that system, leaving it free for other users otherwise. But otherwise, just about all of your analysis should be fine to run locally.
That depends entirely on what data you have, and what you intend to do with it… What do you mean by ‘standard processing’? For the preprocessing, the bottleneck is typically running eddy
, and that is massively accelerated if you have a decent NVIDIA CUDA-capable GPU installed and available (which means not running within the Virtual Machine). Otherwise, if you’re talking about generating ~10 million streamlines per subject and using SIFT2 to generate a connectome, I’d say you’d be looking at somewhere in the region of 6-12 hours of processing per subject – but that depends entirely on the specifics of your data, particularly its resolution.
MRtrix3 itself takes up about 75MB on my system, but its dependencies will also take some space. Qt5 also takes up around 70MB on my system – but it can take up quite a bit more on other systems, depending on how it’s packaged. I don’t expect it to take more than half a GB though.