Appropriate RAM size for calculating 100M tracks

Hi, everyone.
I tried to process HCP data as described here:
but at this step: `tcksift 100M.tck WM_FODs.mif 10M_SIFT.tck -act 5TT.mif -term_number 10M` my PC ran out of free RAM. My PC configuration is: 32GB RAM, 64-bit Linux, two dual-core processors. My aim is to obtain reference fiber tracks of the highest possible quality, so I'm thinking about buying some extra RAM. I know that I can change some settings to reduce memory usage, but that would reduce the fiber tracking quality too.
Could you tell me: is 64GB of RAM enough for processing according to the ISMRM HCP guideline?

Yes, tcksift is pretty RAM-intensive. In any case, maybe this post answers your question?

This issue is also described in the documentation here.

I’d expect the memory usage for HCP 1.25mm data to be ~3 times that of 2.5mm data (each streamline intersects ~3 times as many voxels), so approximately 1GB per million streamlines, give or take. In that case you may fall short even with 64GB.
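The arithmetic behind that rule of thumb can be sketched as follows (the per-million figure and the 3x voxel scaling are the rough estimates from above, not measured values):

```python
# Rough SIFT RAM estimate for 1.25mm HCP data, scaled up from a
# hypothetical ~0.33 GB per million streamlines at 2.5mm resolution.
GB_PER_MILLION_2P5MM = 1.0 / 3.0  # assumed baseline, not a measured value
VOXEL_SCALING = 3.0               # each streamline crosses ~3x as many voxels

gb_per_million = GB_PER_MILLION_2P5MM * VOXEL_SCALING  # ~1 GB per million
required_gb = gb_per_million * 100                     # for 100M streamlines

print(round(required_gb))  # ~100 GB, so 64 GB may indeed fall short
```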

On the terminal where the command crashed, you should be able to see what percentage of streamlines were mapped before the memory error occurred; this gives you an estimate of how many streamlines were loaded. Compare this with the amount of memory that was available on your system when the command was run (32GB minus whatever other processes were using), and you can estimate the RAM required per million streamlines, and hence how much RAM you would need to process your desired 100 million streamlines.
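That extrapolation can be written as a small helper (a sketch only; the function name and the example numbers are hypothetical, so plug in the percentage and memory figures from your own run):

```python
def estimate_required_ram(percent_mapped, ram_used_gb,
                          input_millions, target_millions):
    """Extrapolate the RAM needed for a target streamline count.

    percent_mapped:  % of streamlines mapped before the crash
    ram_used_gb:     free RAM consumed by tcksift at that point
    input_millions:  streamlines in the input file, in millions
    target_millions: streamlines you ultimately want, in millions
    """
    mapped_millions = input_millions * percent_mapped / 100.0
    gb_per_million = ram_used_gb / mapped_millions
    return gb_per_million * target_millions

# Hypothetical example: crash at 30% through a 100M file,
# after consuming ~30 GB of free RAM.
print(estimate_required_ram(30, 30.0, 100, 100))  # 100.0 GB
```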

J-Donald, Robert, many thanks for your replies. Everything is clear to me now.
I used 1.25mm data, so on a PC with 64GB of RAM I should either reduce the spatial resolution or decrease the number of streamlines.