ISMRM tutorial tckgen problem

Hi everyone!
I am trying to run the command tckgen WM_FODs.mif 100M.tck -act 5TT.mif -backtrack -crop_at_gmwmi -seed_dynamic WM_FODs.mif -maxlength 250 -select 100M -cutoff 0.06 and it is taking a very long time.
I started it 6 hours ago and it is only at 6%.

Is it because my computer does not have much RAM (8 GB)? I read a thread from someone whose PC had 32 GB of RAM and the command still failed.

I am trying to do the ISMRM tutorial; do you have any suggestions?

Thanks,
Carlotta.

After almost 20 hours it is at 20%; is there anything I can do to speed it up?

Has anyone else had the same problem?

Thank you, Carlotta.

The issue here is probably two-fold:

  1. The HCP data is enormous, and requires a fair bit of RAM. The data alone, when uncompressed, takes up 4 GB. Any processing of that data that requires additional RAM will quickly overwhelm an 8 GB system. Even something as simple as a straight copy may require a full 8 GB (4 GB to hold the input, 4 GB to hold the output), which would already be problematic on your system… It also takes a lot of time to process such an enormous dataset, even if you had enough RAM. You’d want to look into a system with a lot of CPU grunt to help reduce execution times. Personally, I’d recommend you start with more ‘standard’ data: an average dataset should be in the region of 100-200 MB, which is a lot more manageable.
  2. For tckgen specifically, you generally don’t need much RAM beyond that required to hold the input data. So you should be fine from that point of view – apart from the fact that you’re using the -seed_dynamic option… That option does require a bit more RAM to hold the current streamline density, and also takes longer due to the need to update that density image as tracking proceeds. The particulars of the multi-threading model required to make that work mean that you often won’t get full CPU usage with that option enabled (@rsmith will correct me if I’m wrong). So there’s a good chance that tracking without that option might be considerably faster.

Hope this helps…
Donald.

Thanks Donald, this is really helpful!

I understand what you wrote, and I knew that I could have problems processing this 4 GB dataset, but I wanted to start practicing with MRtrix and these were the data I found. I will soon have more standard data of around 100-200 MB, so I will try with those too.

For tckgen, do you suggest I try without the -seed_dynamic option then? What option could I use instead of -seed_dynamic?

Thank you again,
Carlotta.

As always, there’s no right or wrong here; it depends on what you’re trying to do. There are plenty of options for seeding – the simplest are probably -seed_image or -seed_sphere – but you’ll need to think about which option best matches your research question.
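For what it’s worth, a minimal sketch of what swapping in -seed_image might look like, based on the command from the original post (the mask filename mask.mif is a hypothetical whole-brain mask, e.g. as produced by dwi2mask; all other filenames are from the thread):

```shell
# Same tracking parameters as the original post, but seeding
# uniformly from a whole-brain mask instead of dynamic seeding.
# "mask.mif" is an assumed whole-brain mask image.
tckgen WM_FODs.mif 100M.tck \
    -act 5TT.mif -backtrack -crop_at_gmwmi \
    -seed_image mask.mif \
    -maxlength 250 -select 100M -cutoff 0.06
```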

Thanks for your answer!

I was simply trying to replicate the tutorial because I just started using MRtrix and I wanted to understand how this pipeline works.
I don’t have a specific research purpose. Is there an option that can do the same thing using the data I already generated with the other commands of the tutorial?

Thank you again,
Carlotta.

The particulars of the multi-threading model required to make [dynamic seeding] work mean that you often won’t get full CPU usage with that option enabled (@rsmith will correct me if I’m wrong).

I think it gets full CPU usage up to a certain point. As in, an i7 will get 800%, but you might not get 3200% on a dual 16-core Xeon. @Carlotta_Fabris It would be worth running top: if the slow execution is indeed due to inadequate RAM, the CPU usage should be significantly lower than (100% × number_of_threads) for your system.
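A quick way to check this (a sketch; pgrep/top flags may vary slightly between systems):

```shell
# Watch the CPU and memory usage of the running tckgen process only.
# If RAM is the bottleneck, %CPU will sit well below 100% per thread.
top -p "$(pgrep -f tckgen | head -1)"
```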

It does however require additional track-to-fixel mapping computations, which means that not all of that CPU usage is going to streamlines propagation.

I am trying to do the ISMRM tutorial, do you have any suggestion?

If changing from dynamic seeding to something else doesn’t solve the problem, you could alternatively try down-sampling the DWI / FOD data to a lower spatial resolution.
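Down-sampling can be done with mrgrid; a sketch, assuming a 2 mm target voxel size (the filenames and voxel size here are illustrative, not from the thread):

```shell
# Regrid the FOD image to a coarser 2 mm isotropic voxel grid,
# reducing both RAM usage and processing time.
mrgrid WM_FODs.mif regrid -voxel 2.0 WM_FODs_lowres.mif
```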

Another trick that might reduce RAM usage while preserving the high spatial resolution is maskcrop. Sometimes DWIs have a lot of “dead space” on either side of the brain, which is either zero-filled or contains only noise outside the brain. By reducing the image FoV to only extend as far as is required to encompass the brain, the uncompressed image size can be reduced by as much as 50%.
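In current MRtrix3 the equivalent of this cropping is available through mrgrid’s crop operation; a sketch, assuming a brain mask mask.mif (e.g. from dwi2mask):

```shell
# Crop the image FoV to the bounding box of the brain mask,
# discarding the empty space around the brain.
mrgrid WM_FODs.mif crop -mask mask.mif WM_FODs_cropped.mif
```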

Thank you @rsmith

I solved the problem with -select 1M
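For anyone following along, the full command from the original post with that change would be:

```shell
# Original tutorial command, but generating 1M streamlines
# instead of 100M - roughly 100x less work.
tckgen WM_FODs.mif 1M.tck \
    -act 5TT.mif -backtrack -crop_at_gmwmi \
    -seed_dynamic WM_FODs.mif \
    -maxlength 250 -select 1M -cutoff 0.06
```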

I just need 1M fibers to run my analysis!

Thank you again,
Carlotta.