I’m trying to build an MRtrix-based diffusion processing pipeline for the HCP dataset. However, I’m running into the problem that tckgen is taking prohibitively long.
In the command below, when I set $nTracts to "1M", it took 9.5 hours to run for a single subject on 46 threads. Assuming that processing time increases linearly with the requested number of tracts, that’d mean 95 hours for a single subject at the recommended 10M tracts. Scaling this up to a large number of subjects would be computationally infeasible.
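For reference, here’s the back-of-envelope extrapolation I’m working from (linear scaling in tract count is an assumption, and the cohort size below is just a round illustrative number, not the exact HCP subject count):

```python
# Back-of-envelope runtime extrapolation, assuming tckgen runtime
# grows linearly with the requested number of tracts.
hours_per_1m_tracts = 9.5        # measured: 1M tracts, one subject, 46 threads
target_tracts_millions = 10      # recommended 10M tracts

hours_per_subject = hours_per_1m_tracts * target_tracts_millions
print(hours_per_subject)         # 95.0 hours per subject

n_subjects = 1000                # hypothetical round cohort size for illustration
total_days = hours_per_subject * n_subjects / 24
print(total_days)                # ~3958 days of compute at this throughput
```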
tckgen -nthreads 46 "$tmp"/DWI_FOD_WM.mif "$tmp"/DWI_hollander_tractogram100.tck -act "$tmp"/5TT.mif -backtrack -crop_at_gmwmi -seed_dynamic "$tmp"/DWI_FOD_WM.mif -maxlength 250 -number "$nTracts" -step 0.8 -cutoff 0.06
Am I doing something wrong, or is this the expected processing time for a single subject? If so, how could I change this command to make it feasible for a dataset as large as the HCP?