Dear MRtrix developers and experts, I’m very interested in performing tractography analyses and I’m looking forward to using MRtrix3, since it seems to be an excellent open-source package.
I’m using a Windows8 computer with the following specs:
CPU: Intel Core i7 (4th Gen) 4770 / 3.4 GHz
Number of Cores: Quad-Core
RAM: 16 GB
I’ve also installed an Ubuntu Linux virtual machine with VirtualBox (8 GB of RAM).
I have some basic questions about the system requirements.
I’m wondering if our machine is powerful enough to perform tractography analyses with MRtrix3. Should we add another 16 GB of RAM (the maximum supported is 32 GB)? Would MRtrix3 work on the VM despite only 8 GB of available RAM? Would it be a better idea to run MRtrix3 on a server?
If our machine can handle the analyses, how much time would standard processing require per subject with our computational power?
How large is the MRtrix folder after installation? (I have only 60 GB of free disk space on my Linux VM.)
Yes, with the possible exception of tcksift if running with more than ~10 million streamlines – it’s quite memory-intensive…
The other memory-hungry application is fixelcfestats (the statistics for fixel-based analysis), which can require 64–128 GB…
I’d say try it without, and upgrade if you need to. However, if you know you intend to use tcksift / tcksift2 (which I’d recommend), I’d install the extra RAM now.
Yes, within the same limitations as above, and depending on your input data. For instance, processing HCP data would be problematic on 8 GB, since the raw DWI extracts to 4 GB once uncompressed – so even converting the data from NIfTI to .mif format might be an issue. But for more standard datasets, 8 GB is typically ample, until you need to run tcksift…
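To see why 8 GB gets tight with HCP data, here’s the back-of-the-envelope arithmetic. The 145×174×145 matrix, 288 volumes, and 32-bit float storage are my assumptions about the HCP diffusion data (not figures stated in this thread), but they land close to the 4 GB mentioned above:

```python
# Rough uncompressed footprint of an HCP-style DWI dataset.
# Matrix size, volume count, and datatype are assumptions for illustration.
voxels = 145 * 174 * 145      # assumed spatial matrix of HCP diffusion data
volumes = 288                 # assumed number of DWI volumes
bytes_per_voxel = 4           # assuming float32 storage after conversion

total_gb = voxels * volumes * bytes_per_voxel / 1e9
print(f"~{total_gb:.1f} GB uncompressed")   # ~4.2 GB
```

Holding one copy of that in RAM, plus whatever working buffers the conversion needs, is why an 8 GB VM struggles with this particular dataset while being perfectly adequate for typical clinical-resolution acquisitions.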
Note also that display within a VM is very problematic / near-impossible. See this recent thread on the topic (it’s running on docker, but the same issues apply).
It depends what kind of server… If you have access to a large memory system, then you could just run those parts of your analysis on that system, leaving it free for other users otherwise. But otherwise, just about all of your analysis should be fine to run locally.
That depends entirely on what data you have, and what you intend to do with it… What do you mean by ‘standard processing’? For the preprocessing, the bottleneck is typically running eddy, which is massively accelerated if you have a decent NVidia CUDA-capable GPU installed and available (which means not running within the virtual machine). Otherwise, if you’re talking about generating ~10 million streamlines per subject and using SIFT2 to generate a connectome, I’d say you’d be looking at somewhere in the region of 6–12 hours of processing per subject – but that depends entirely on the specifics of your data, particularly its resolution.
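For concreteness, a sketch of the kind of per-subject pipeline described above. The command names are real MRtrix3 tools, but the filenames are placeholders and the exact options will depend on your acquisition; the executable part of the snippet just checks whether a CUDA GPU is likely present, since that determines whether eddy can run accelerated:

```shell
# eddy's CUDA variant needs direct access to an NVIDIA GPU,
# which a VirtualBox guest does not get - so check on the host:
if command -v nvidia-smi >/dev/null 2>&1; then
    echo "NVIDIA GPU detected: eddy can likely run GPU-accelerated"
else
    echo "No NVIDIA GPU detected: eddy will run on CPU (much slower)"
fi

# The rest of a typical tractography/connectome pipeline (filenames are
# placeholders, options omitted; consult the MRtrix3 docs for your data):
#   dwifslpreproc dwi.mif dwi_preproc.mif ...        # motion/distortion correction (wraps eddy)
#   dwi2response tournier dwi_preproc.mif response.txt
#   dwi2fod csd dwi_preproc.mif response.txt fod.mif
#   tckgen fod.mif tracks.tck -select 10M            # ~10 million streamlines
#   tcksift2 tracks.tck fod.mif weights.csv          # SIFT2 streamline weights
#   tck2connectome tracks.tck nodes.mif connectome.csv -tck_weights_in weights.csv
```

The tckgen and tcksift2 steps dominate the 6–12 hour estimate; the GPU only helps the eddy stage.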
MRtrix3 itself takes up about 75 MB on my system, but its dependencies will also take some space. Qt5 takes up around 70 MB on my system – though it can take quite a bit more on other systems, depending on how it’s packaged. I don’t expect the total to exceed half a GB, though.