My team has recently upgraded to the latest version
== mrview 3.0_RC2-65-g5d1eb1bb ==
64 bit release version, built Sep 4 2017, using Eigen 3.2.92
However, when I re-run some old code (from July 2017), in particular dwi2response, I get NaNs, whereas I didn't with the old version (0.3.15-522-gbc67f1a1).
Here is the command which I ran both in July and recently:
dwi2response dhollander -lmax 4,4 -mask mask.nii.gz -fslgrad bvecs bvals -force dwi.nii.gz rf_wm.txt rf_gm.txt rf_csf.txt
I would also like to point out that mask.nii.gz and dwi.nii.gz have not been modified since July, when they were first generated.
Here is my response function from July (two b-values, 0 and 1000, with 2 and 30 directions respectively):
4457.664286 1052.966317 -1666.484358
2124.901589 -858.5620121 185.5793654
And here is the response function from the recent upgrade
-nan -nan -nan
2176.443279262384 -910.298020249314 203.657380058315
Does anyone have an idea why this could be happening? Is it a bug or is it a problem with my data?
The problem is in your usage of the -lmax option.
dwi2response dhollander naturally extracts an anisotropic response for WM, and an isotropic one for GM and CSF. The -lmax option, when provided, allows you to override the automatic lmax settings for the anisotropic WM response, with one value per b-value in your dataset. The automatic lmax setting for the anisotropic WM response is lmax=0 for the b=0 data, and lmax=10 for all other b-values present. This makes sense, since the b=0 data is not expected to show any anisotropy (and if it appears to do so, it's due to noise or other issues).

For the other b-values, you don't have to worry about lmax=10 appearing to be too high with respect to the number of gradient directions in your data: the response function is estimated from a couple of tens to hundreds of voxels at once, using all of their data. So even though you "only" have 30 directions at b=1000, if the response function ends up being estimated from 100 or so voxels, the fitting algorithm will have 3000 data points at its disposal. Note that for lmax=10, the response function (which is represented only in zonal harmonics, not full spherical harmonics) only needs 6 parameters. 3000 data points to fit 6 parameters is pretty safe.
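To make that parameter counting concrete, here is a minimal sketch (these are the standard spherical harmonic coefficient counts, not MRtrix code; the voxel and direction numbers are the ones from this thread):

```python
# Coefficient counts for response function fitting.
# A zonal harmonic response (m = 0 only, even l up to lmax) needs
# lmax/2 + 1 coefficients; a full even-order SH series would need
# (lmax + 1) * (lmax + 2) / 2.

def n_zonal(lmax):
    return lmax // 2 + 1

def n_full_sh(lmax):
    return (lmax + 1) * (lmax + 2) // 2

print(n_zonal(10))    # 6 parameters for an lmax=10 zonal response
print(n_full_sh(10))  # 66 for the full even SH series of the same order

# With ~100 single-fibre voxels and 30 directions at b=1000:
voxels, directions = 100, 30
print(voxels * directions)  # 3000 data points constraining those 6 parameters
```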
I'm actually surprised (then again, not that much) that you'd get values for b=0 up to lmax=4 before. While your b=1000 response part (2124.901589 -858.5620121 185.5793654) looks as expected, the b=0 response part you got before (4457.664286 1052.966317 -1666.484358) looks rather funky (which isn't good).
So the solution is simple: just do away with that -lmax 4,4; it isn't needed, and the first 4 in there is actually wrong (it should've been a 0 at best). Given how robust response function estimation has become with our recent strategies, the -lmax option should really only be considered in very special scenarios (or for some kinds of technical or theoretical experiments). For any other scenario, simply don't specify the -lmax option; things work out better with the defaults automatically.
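For reference, the fix amounts to running the same command as before with the -lmax option simply dropped (filenames exactly as in the original post):

```shell
# Same call as in July, but letting dwi2response choose lmax automatically
# (lmax=0 for the b=0 shell, lmax=10 for the other b-values)
dwi2response dhollander -mask mask.nii.gz -fslgrad bvecs bvals -force \
    dwi.nii.gz rf_wm.txt rf_gm.txt rf_csf.txt
```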
Also, see some more information in this post: "-lmax for dwi2response and dwi2fod", where a user ran into other confusion regarding the -lmax option. The solution there was also to simply not specify the option to begin with.
Just one further clarification:
The reason the behaviour has changed for you is that in MRtrix3 Release Candidate 1, dwi2response was altered to use the new amp2response command, as opposed to sh2response. The details of this command can be found here.
sh2response simply rotates the SH decomposition of each single-fibre voxel so that the fibre is aligned with the z-axis, then averages across voxels. In this situation, l>0, m=0 terms will be dominated by noise: the signal intensity shouldn’t be influenced by fibre orientation at b=0!
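As a small numerical illustration of that last point (a sketch, not MRtrix code): the zonal SH basis functions for l>0 integrate to zero against a constant signal, so a perfectly isotropic, b=0-like signal has no genuine l>0 content, and anything measured there can only be noise:

```python
import math

# Zonal SH terms on the sphere are Legendre polynomials P_l(cos theta).
def P0(x): return 1.0
def P2(x): return 0.5 * (3 * x * x - 1)

# Project a signal S(theta) onto P_l over the sphere (midpoint rule):
# c_l is proportional to the integral of S(theta) P_l(cos theta) sin(theta)
def project(signal, P, n=100000):
    total = 0.0
    for i in range(n):
        theta = math.pi * (i + 0.5) / n
        total += signal(theta) * P(math.cos(theta)) * math.sin(theta)
    return total * math.pi / n

iso = lambda theta: 1.0  # perfectly isotropic signal, like ideal b=0 data
print(f"l=0 projection: {project(iso, P0):.4f}")  # close to 2: the mean term survives
print(f"l=2 projection: {project(iso, P2):.4f}")  # close to 0: only noise could put power here
```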
amp2response is encountering a fundamental issue in obtaining a solution. It uses the signals from all single-fibre voxels at once; but it faces a conflict between having a signal intensity that shouldn't vary as a function of angle from the fibre orientation, and a constraint that forces monotonicity as a function of angle from the fibre orientation (details in the abstract linked above). Hence the NaNs.

So I'd argue that NaN is in fact the more sensible answer of the two, since it highlights the error in your -lmax 4,4 setting.
Thank you so much! This solved my problem.