dwi2tensor FA differs from dtifit FA

I have run my (unprocessed) diffusion data through both dtifit and dwi2tensor.

dwi2mask -fslgrad sub.{bvec,bval} sub_dwi.nii.gz sub_mask.nii.gz

# DTI fit
dtifit -k sub_dwi.nii.gz -r sub.bvec -b sub.bval -m sub_mask.nii.gz -V -w -o sub_dtifit/sub_dtifit

# dwi2tensor + tensor2metric
dwi2tensor -mask sub_mask.nii.gz sub.mif sub_dwitensor/sub_tensor.mif
tensor2metric sub_dwitensor/sub_tensor.mif -fa sub_dwitensor/sub_fa.mif -mask sub_mask.nii.gz

Unfortunately, the FA maps differ substantially in appearance.

[image: dtifit FA map]

[image: dwi2tensor + tensor2metric FA map]

In a previous post, I mentioned that the original, multi-shell bvecs for our sequence, ABCD GE RSI (or pe_polar_flex), were being rescaled on import such that the b-values became uniform (i.e. effectively single-shell) whenever MRtrix3 commands read them. Could this bvec rescaling be causing the issue?
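
In case it’s useful, this is how I’ve been checking the shell structure as MRtrix3 sees it (same filenames as above; -shell_bvalues and -shell_sizes just print the detected shells and the number of volumes in each):

# How does MRtrix3 group the volumes into shells?
mrinfo -fslgrad sub.bvec sub.bval sub_dwi.nii.gz -shell_bvalues -shell_sizes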


OK, that’s clearly a bit more than just a minor discrepancy… But not surprising if all shells are being merged into one. What is surprising to me is that if that were the case, FSL’s dtifit should also be affected, which doesn’t seem to happen here. So I’m guessing that for dtifit you’re using input data that has been converted correctly, but for MRtrix you’re feeding in data converted directly from the DICOMs, which will not import correctly. Is that right?

If so, why not also use the NIfTIs for the MRtrix computation? All MRtrix commands support NIfTI natively. Something like this ought to do the job:

# dwi2tensor + tensor2metric
dwi2tensor -mask sub_mask.nii.gz sub_dwi.nii.gz -fslgrad sub.bvec sub.bval sub_dwitensor/sub_tensor.mif
tensor2metric sub_dwitensor/sub_tensor.mif -fa sub_dwitensor/sub_fa.mif -mask sub_mask.nii.gz

Ah, my apologies for not mentioning this before: the conversion to .mif for dwi2tensor + tensor2metric was done directly from the NIfTI + bval / bvec files.

Even if I skip the conversion to .mif and feed the original bvec / bval files to dwi2tensor directly, I get the same result. I wonder if this is related to the automatic b-value scaling you described before.
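
One way to see exactly what dwi2tensor will be working with is to print the gradient table as MRtrix3 imports it (mrinfo accepts the same -fslgrad option; the b-values end up in the fourth column):

# Print the DW gradient table as imported, including any rescaled b-values
mrinfo -fslgrad sub.bvec sub.bval sub_dwi.nii.gz -dwgrad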

Looks like it might be the automatic scaling.

If I do the initial NIfTI-to-.mif conversion without b-value scaling, the FA results become comparable:

[image: dtifit FA map]

[image: mrconvert (no bval scaling) + dwi2tensor + tensor2metric FA map]

Though there are still some small discrepancies:

$ mrstats sub_dtifit/sub_dtifit_FA.nii.gz 
mrstats: [100%] uncompressing image "sub_dtifit/sub_dtifit_FA.nii.gz"
      volume       mean     median        std        min        max      count
       [ 0 ]  0.0564785          0   0.144133          0    1.22461     868352

$ mrstats sub_no_bval_scaling_dwitensor/sub_fa.mif
      volume       mean     median        std        min        max      count
       [ 0 ]  0.0597371          0   0.155561          0    1.22473     868352

OK, can I ask for a few more details on what you mean here, preferably the exact command line that you used for that step (and the equivalent with b-value scaling), so I know exactly what was done?

That’s expected. With the -w flag you used, FSL’s dtifit performs a weighted least-squares fit, whereas dwi2tensor performs an iteratively-reweighted least-squares fit (initialised from the weighted least-squares fit). Adding -iter 0 should give you pretty much the same results (though again, not exactly the same, due to differences in how small / negative values are handled).
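
For instance, reusing the filenames from your mrstats comparison above (the _iter0 suffix is just my own, to keep the outputs separate):

# Single weighted least-squares fit, no iterative reweighting
dwi2tensor -iter 0 -mask sub_mask.nii.gz sub_no_bval_scaling.mif sub_no_bval_scaling_dwitensor/sub_tensor_iter0.mif
tensor2metric sub_no_bval_scaling_dwitensor/sub_tensor_iter0.mif -fa sub_no_bval_scaling_dwitensor/sub_fa_iter0.mif -mask sub_mask.nii.gz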

Sure thing.

Bvalue scaling off

mrconvert -bvalue_scaling no -fslgrad sub.bvec sub.bval sub_dwi.nii.gz sub_no_bval_scaling.mif
dwi2tensor -mask sub_mask.nii.gz sub_no_bval_scaling.mif sub_no_bval_scaling_dwitensor/sub_tensor.mif
tensor2metric sub_no_bval_scaling_dwitensor/sub_tensor.mif -fa sub_no_bval_scaling_dwitensor/sub_fa.mif -mask sub_mask.nii.gz

Original post, with bvalue scaling

mrconvert -fslgrad sub.bvec sub.bval sub_dwi.nii.gz sub.mif
dwi2tensor -mask sub_mask.nii.gz sub.mif sub_dwitensor/sub_tensor.mif
tensor2metric sub_dwitensor/sub_tensor.mif -fa sub_dwitensor/sub_fa.mif -mask sub_mask.nii.gz

Second post, no mrconvert, directly from NIfTI + bval / bvec

dwi2tensor -mask sub_mask.nii.gz -fslgrad sub.bvec sub.bval sub_dwi.nii.gz sub_no_mrconvert_dwitensor/sub_tensor.mif
tensor2metric sub_no_mrconvert_dwitensor/sub_tensor.mif -fa sub_no_mrconvert_dwitensor/sub_fa.mif -mask sub_mask.nii.gz

Ok, thanks. Still struggling to figure out what’s going on… Any chance you could share your bvecs & bvals with me so I can investigate?

Sure thing, I’ll send you a DM shortly.

Ok, thanks for sending that through. I went through your earlier posts again, and saw that I had misunderstood what you’d said. It all makes sense now…

Yes, it is the b-value scaling. As you’ll see from the docs on b-value scaling, on many scanners the only (or certainly the easiest) way to get a multi-shell sequence running is to provide a gradient table with a single b-value, and to modulate the amplitude of each direction vector to achieve the right gradient amplitude for the desired b-value. Since this is quite a common occurrence, MRtrix3 will by default apply the reverse operation (modulate the nominal b-value according to the amplitude of the direction vector) when it detects significant deviations in the amplitudes of the direction vectors. The assumption is that if these amplitudes vary by more than can be explained by rounding errors, it’s probably because they’ve been used to modulate the b-value.
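
To put some (made-up) numbers on that: the effective b-value is the nominal b-value multiplied by the squared norm of the direction vector. So a gradient table line like

0.333  0.333  0.333  3000

has |g|² = 3 × 0.333² ≈ 0.333, and would be interpreted as an effective b-value of 3000 × 0.333 ≈ 1000.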

Unfortunately, in your case it looks like the amplitudes of the bvecs were indeed used to modulate the b-value as I described above, but the bvals have already been corrected for this. This means that our correction for this situation modulates the b-values a second time, even though they’ve already been corrected.

The fix is to disable the b-value scaling when importing the data, as you found out. In this case, we recommend doing a full conversion, so that the converted data has a correct and fully self-consistent DW gradient table that won’t cause any further issues downstream. In your case, the output data should then have unit-amplitude direction vectors, while preserving the b-values as specified in the bvals file. Having unit-amplitude direction vectors guarantees that b-value scaling won’t be triggered again at some later stage of processing.
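
If you want to verify this, something along these lines should do the job (repeating your no-scaling conversion; the output filenames are just examples):

# One-off clean conversion, then inspect the stored gradient table
mrconvert -bvalue_scaling no -fslgrad sub.bvec sub.bval sub_dwi.nii.gz sub_clean.mif
# Direction vectors should now have unit amplitude, with b-values as per the bvals file
mrinfo sub_clean.mif -dwgrad
# Optionally export a corrected bvec / bval pair for use outside MRtrix3
mrinfo sub_clean.mif -export_grad_fsl sub_clean.bvec sub_clean.bval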

Anyway, I hope it all makes sense… If you’re really interested, you’ll find our extensive musings on the topic scattered about in various discussions on our GitHub repo…
