[SOLVED] Dark blocks in -fod output from tckglobal

Dear MRtrix3 community,

I’ve just run tckglobal on an HCP_DB subject with the latest software version on Ubuntu 12.04.
The input responses are from the dwi2response script with the msmt_5tt algorithm.
The resulting FOD seems wrong, with a checkered effect, leading to wrong streamlines:

However, the FOD image I obtained from msdwi2fod with the same input responses seems fine:
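For context, my pipeline is roughly along these lines (placeholder filenames, not my exact calls):

    # estimate multi-shell multi-tissue responses from a 5TT segmentation
    dwi2response msmt_5tt dwi.mif 5tt.mif wm_response.txt gm_response.txt csf_response.txt
    # global tractography, writing out the fitted FOD image alongside the tracks
    tckglobal dwi.mif wm_response.txt -riso csf_response.txt -riso gm_response.txt \
        -fod fod.mif tracks.tck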

Do you have any suggestion to fix my procedure?

Best,

Marco

Hi @Marco_Aiello,

That looks a bit suspicious indeed. It might be helpful if you provide a coronal and a sagittal slice as well (of the tckglobal output).

Apart from that, note that the -fod output from tckglobal isn’t a voxel-wise independent FOD like the one from, e.g., dwi2fod or msdwi2fod. It’s actually much closer to a track orientation distribution, computed from the track segments that tckglobal fits to the data, using the same method that tckmap uses to compute the track orientation distribution. So it also obeys the modelling assumptions that the global tractography imposes via various internal energy mechanisms, much like the tracking constraints that other tractography algorithms use: step size corresponds to the segment length here, and curvature thresholds correspond to the internal energy / smoothness terms.

So it might also be interesting to show us the output of tckmap with the -tod option, applied to the tractogram output by tckglobal, to identify whether there’s anything odd with your actual tracks, or whether it has more to do with the track mapping code implemented in tckglobal.
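For example, something along these lines should do it, if I recall the tckmap syntax correctly (the filenames and the lmax of 8 are just placeholders):

    # map the tckglobal tractogram to a track orientation distribution on the DWI grid
    tckmap -tod 8 -template dwi.mif tracks.tck tod.mif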

Hi Marco,

This is an issue with my multi-threaded implementation, which I’ve recently seen popping up a few times with other users. A quick way to get rid of it would thus be to set -nthreads 1, although this will of course increase the run time.

Nevertheless, a user recently reported to me that this issue only occurs when the particle length (-length, default 1 mm) is too large for the voxel size. I think it will disappear if you set length < 2*vox, and certainly with length < vox. Setting the particle length larger than this isn’t a good idea anyway, because my implementation concentrates the entire “signal contribution” at the particle midpoint. Did you change the particle length in your setup?
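Just to be concrete, the workaround I have in mind is something along these lines (placeholder filenames; mrinfo prints the image header, so you can compare the voxel size against the -length value):

    # check the voxel size of the DWI data in the image header
    mrinfo dwi.mif
    # re-run single-threaded, with the particle length at (or below) the default 1 mm
    tckglobal dwi.mif wm_response.txt -riso csf_response.txt -riso gm_response.txt \
        -length 1 -nthreads 1 -fod fod.mif tracks.tck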

In any case, our paper includes results on the HCP data, and all parameters in tckglobal default to the settings used in that paper. The only exception is the number of iterations, which you will need to increase to 10^9 for good results in a full brain.
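For a full brain, a typical call would then look something like the following sketch (filenames are placeholders, everything else left at its default):

    tckglobal dwi.mif wm_response.txt -riso csf_response.txt -riso gm_response.txt \
        -mask mask.mif -niter 1e9 -fod fod.mif -fiso fiso.mif tracks.tck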

Cheers,

Daan

Hi Daan,

thank you!
I’ve checked that the length was indeed the default 1 mm, which is smaller than the voxel size as you prescribe.
It is now running with -nthreads 1; I will promptly update you on the results.

Thank you again,

Marco

Hi Marco,

OK, good to know. If you’re already running with a length of 1 mm, then this is likely a bug in the multithreading code and I need to fix it. To be precise, upon each proposal a cubical region around the position of the segment is “locked” until the proposal completes. The effect you describe occurs when, in some cases, that lock is not properly released. The real difficulty, of course, is finding out exactly in which cases this occurs.

So, first of all, let’s confirm that the effect you observe indeed disappears with -nthreads 1. Thanks for doing that right away. In addition, can you tell me how many threads you were using by default?

What I find interesting is that I haven’t seen this issue on my own system in a long time, at least not since I pushed code that I thought would fix it. You’re using the latest version, so I wonder if it could somehow be system-bound, maybe related to the POSIX threads library.

Cheers,
Daan

Hi Daan,

With the -nthreads 1 option it works fine, and with good timing! I don’t set the number of threads in my configuration, so it was using the default. I agree that the problem might be system-bound; let me know if you need anything else.

Thank you again,
Marco

@dchristiaens, this is unlikely to be due to a buggy POSIX threads library: it is such a basic part of a modern Linux system that a bug there would throw up all kinds of problems. Unfortunately, debugging multi-threading issues is ridiculously hard, and most likely OS / hardware dependent. It may well be that the MacOSX implementation behaves subtly differently from the Linux implementation, for instance. We also find that a lot of subtle race conditions disappear when running debug code, probably because it runs so much slower…

One option, if Marco is agreeable to it, is for him to share the data with you, along with the exact command issued; you’d at least be able to see whether the issue is reproducible at your end, which would give you a starting point for debugging…

Hi Marco,

Excellent, thanks for testing! Are you happy with using the single-threaded code for now, until I can implement a proper fix? I’ll be in touch if I need you to test anything. In the meantime, I’ll document this on GitHub too.

Regarding the number of threads, MRtrix uses all CPU cores by default. How many cores do you have on your system? I’d like to know if it is indeed system-bound, or if it could be a general race condition that I’ve just been fortunate to escape so far. The likelihood of a race condition would increase (quadratically?) with the number of threads. Are you by any chance running on a massive cluster with e.g. 64 cores?
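If you’re not sure, on Linux something like this should tell you (assuming GNU coreutils is available):

    nproc    # prints the number of processing units available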

Cheers,
Daan

I don’t think it’s data related, given that it’s a fairly standard HCP scan. Nevertheless, it wouldn’t hurt if you could send me the subject ID.

The exact command you used would be useful, yes. I am assuming you used all the defaults?

The command indeed used the default settings; the subject ID is 899885.

It was run on a regular PC (8-core Intel Xeon).

Working with the single-threaded code will be fine for me; feel free to ask for anything you need.

Best

Marco

Thanks for reporting back. I filed GitHub issue 477 to track this until I implement a fix, and changed the title of this post to better reflect the nature of the problem.

Hi Marco,

This issue should now be fixed by pull request #824, which has been merged into master. Sorry it took such an embarrassingly long time… Let me know if you experience any further issues.

Cheers,
Daan