I am running global tractography on the ISMRM 2015 tractography challenge phantom. Unfortunately, I only get a very sparse result (5k fibers with w=0.2). If I turn down the particle weight or ppot, I get more noise but not many more fibers.
Here is my command:
tckglobal mydwi.mif wmr.txt -riso csfr.txt -riso gmr.txt -mask BrainMask.nii.gz -niter 1e9 fibers.tck -lmax 6 -nthreads 12 -force -w 0.1
Happy to hear you’re including tckglobal in the challenge comparison. Are you also releasing a multi-shell HARDI version of the phantom data then? It would be of great value for evaluating this and other MSMT tractography methods.
Assuming that you are indeed working with a multi-shell version of the phantom, the numbers you get are definitely not right. For an adult brain dataset, I typically get 50k-100k tracks with w=0.1, and I would expect roughly the same in your phantom.
The command parameters you’re using look fine to me. The main thing to check is the scaling of your response functions. Did you estimate these with dwi2response, and do they look right? Or are they the ground-truth responses from your simulation model? In the latter case, make sure that they are scaled according to the b=0 data intensity, because this scale directly impacts the track density in tckglobal.

The signal response of each particle (track segment) in the reconstruction is simply the WM RF times the weight w. Hence, if all is right (and ignoring the effect of ppot for simplicity), w=0.1 will result in an average density of 10 tracks per voxel in WM. Since you are getting a very sparse result, my first guess is that your WM RF is too large. In that case, global tractography can essentially “explain” the signal with very few particles, which would match what you’re seeing.
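The density arithmetic above can be sketched as a toy calculation (not MRtrix code; the scale-error factor is a hypothetical illustration of a mis-scaled response):

```python
# Toy sketch: each particle (segment) contributes (WM response) * w to the
# predicted signal, so explaining the full WM signal takes roughly 1/w
# overlapping segments per voxel (the effect of ppot is ignored here).
def expected_density(w, rf_scale_error=1.0):
    """Average track segments per WM voxel.

    rf_scale_error > 1 models a WM response stored too large relative
    to the b=0 intensity (hypothetical illustration).
    """
    return 1.0 / (w * rf_scale_error)

print(expected_density(0.1))        # correctly scaled RF: ~10 per voxel
print(expected_density(0.1, 10.0))  # RF 10x too large: ~1 per voxel
```

This is why a response function that is too large produces a sparse-looking reconstruction even when the fit itself is fine.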
Alternatively, check that the voxel size is on the order of adult brain data. It’s a silly little thing that only matters in relation to the particle length, but I have tripped over it a few times with phantoms and with preclinical data. As a last resort, check if you get the same result without multi-threading. I would be very embarrassed if you didn’t, given how many times I’ve thought I had fixed the issues we initially had, but it’s definitely a sensitive matter across platforms, so it would be good to be sure…
Thank you for your reply! This issue is with the original low-b/32-direction phantom. Nevertheless, we are also working on HCP-style phantoms; I expect we will publish them in the future.
If I remember correctly, I used “dwi2response dhollander” to estimate the response functions. They work fine for CSD tracking. But if the scaling is wrong, I should be able to correct for this with the weight parameter, right? By the way, what happens if I use MSMT CSD with single-shell data? The white matter results at least look fine, and I think still less noisy than standard CSD.
What do you mean by “Alternatively, check that the voxel size is set on the order of adult brain data”?
I also tried it single-threaded, but that didn’t change anything. Lucky you!
I noticed that I actually get about 80k fibers, but most of them are really short and look like noise. If I apply a 20mm length threshold, I end up with about 9-10k fibers. Maybe I need to track longer, but I already used 1e10 iterations.
Aha, that explains a lot. tckglobal relies on the multi-tissue model for multi-shell data, which can easily become unstable when fitting 3 tissues (WM/GM/CSF) to 2 shells (b=0 and b=X). If you want to work with this data rather than an HCP-style phantom, I would recommend using a 2-tissue WM/CSF model within a WM mask.
That will be fine then. I was just wondering whether you might have used the ground-truth response functions from the phantom simulation. Yes, if the scaling were wrong you could correct it with the weight parameter, but then you would also need to adjust ppot, because its default (5% of w) assumes consistent scaling.
When #tissues > #shells, the tissue volume fractions are not uniquely defined without additional priors, which impedes direct MT-CSD of single-shell data. Global tractography provides a spatial prior that might help, and I’ve had limited success with single-shell data at b > 2000. But low-b single-shell data is very challenging for our multi-tissue model, because the DWI intensity in GM is still high at low b. Therefore, I’d recommend against it…
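A minimal numerical illustration of why this is underdetermined (the response amplitudes below are made up for the sake of the example):

```python
import numpy as np

# Per voxel, the multi-tissue fit solves: signal_shell = sum_t f_t * R_t(shell).
# With 2 shells (b=0 and one b=X) but 3 tissue fractions (WM/GM/CSF),
# the system has more unknowns than equations:
R = np.array([[1.00, 1.00, 1.00],   # b=0 amplitudes of WM, GM, CSF (illustrative)
              [0.30, 0.50, 0.05]])  # b=X amplitudes (illustrative)
print(np.linalg.matrix_rank(R))     # rank 2 for 3 unknowns: not unique

# Any vector in the null space can be added to a solution without
# changing the predicted signal:
null = np.linalg.svd(R)[2][-1]
print(np.allclose(R @ null, 0))     # True
```

With only a non-negativity constraint on the fractions, an entire family of tissue decompositions fits the data equally well, which is where an additional spatial prior (as in tckglobal) can help.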
Again, it seems like this is moot now. All I meant was that the particle length defaults to 1mm, regardless of the voxel size encoded in the image header. Hence, if the voxel size is not in the expected range of 1-3mm (e.g. in small-animal data or phantoms), the particle length may need to be set differently.
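As a rough sketch of that scaling, one could keep the particle length proportional to the voxel size (a hypothetical rule of thumb for illustration, not an official recommendation; the function name and reference values are my own):

```python
# Hypothetical rule of thumb: scale the particle (segment) length with
# voxel size, relative to the default 1 mm at human-scale (~2 mm) voxels.
def suggested_particle_length(voxel_size_mm, reference_voxel_mm=2.0,
                              default_length_mm=1.0):
    return default_length_mm * voxel_size_mm / reference_voxel_mm

print(suggested_particle_length(2.0))  # human-scale data: 1.0 mm (the default)
print(suggested_particle_length(0.1))  # small-animal voxels: 0.05 mm
```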
Good to know, it’s a relief…
For human brain data, I’ve never seen large improvement beyond 1e9 iterations, so that should be fine.