dwidenoise and mrdegibbs - optimal settings

Dear MRTrix Community,

I acquired a DWI dataset using the following parameters:

  • multi-shell: b=400, b=1000 and b=2000 s/mm² with 32, 32 and 60 directions per shell, plus 8 b=0 volumes (132 volumes in total);
  • voxel size: 2 mm isotropic;
  • TR/TE=6800/89 ms;
  • acceleration factors: SMS=3; GRAPPA=2.

As such, I have a few questions regarding the optimal parameter settings for the dwidenoise and mrdegibbs preprocessing steps.

  1. I applied dwidenoise to the images exported directly from the scanner, and then tested 2 extent sizes besides the default. However, I got confusing results compared to the recommendations; see the attached PDFs (noise map + RMS of residuals) and the command sketch after this list. It seems to me that the most appropriate extent is ext3. Since I am using multiband, my data has spatially varying noise, right? So I should be using a kernel size N ~ M, i.e. 5x5x5 = 125 ≈ 132 volumes, right? However, the results seem to contradict this recommendation:

For maximal SNR gain we suggest to choose N>M for which M is typically the number of DW images in the data (single or multi-shell), where N is the number of kernel elements. However, in case of spatially varying noise, it might be beneficial to select smaller sliding kernels, e.g. N~M, to balance between precision, accuracy, and resolution of the noise map.

dwidenoising_report1.pdf (313.4 KB)
dwidenoising_rmsofresiduals.pdf (301.6 KB)

  2. For the mrdegibbs correction, how can I decide on the most appropriate set of parameters (nshifts, minW, maxW)?

  3. For both corrections, do you have any quality control metric that can be used to evaluate the quality of the corrections performed, or is only visual inspection advised?
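
For reference, the comparison I ran was along these lines (a minimal sketch only; it assumes a 4D series dwi.mif, and all output filenames are illustrative):

```
# Denoise with two explicit patch sizes and keep the noise maps
dwidenoise dwi.mif dwi_den5.mif -extent 5 -noise noise5.mif
dwidenoise dwi.mif dwi_den7.mif -extent 7 -noise noise7.mif

# RMS of the residuals across volumes, for visual inspection
mrcalc dwi.mif dwi_den5.mif -subtract - | mrmath - rms rms_res5.mif -axis 3
mrcalc dwi.mif dwi_den7.mif -subtract - | mrmath - rms rms_res7.mif -axis 3
```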

Thanks!

Hi @anafouto

Regarding denoising, we have made a fair few changes between the current master and dev branches, and I strongly recommend using the dev version. We are actually using the time freed up by our workshop cancellation to finalise the new release, which will include these changes. Specifically, the noise estimator has been updated, as explained here.

The images you show appear to have discontinuities in the noise level in the 5x5x5 case, which is symptomatic of the issue that drove the change on dev. I therefore suggest that you update. In the dev version, the default patch size is also selected automatically to be as close to the total number of volumes as possible, i.e., a patch size of 5x5x5 (=125) would be selected for your data (132 volumes). If you do wish to stick with the current master branch, the 7x7x7 patch size would be better.

One more thing: the spatially varying noise level in your data is indeed partly due to the multiband reconstruction, but it is also affected by the coil sensitivity itself. Therefore, single-band dMRI data also has spatially varying noise.

Regarding unringing, the parameters in mrdegibbs are mostly a trade-off between precision and run time. The command defaults should be good enough for most data, so I wouldn’t worry about them unless you spot any issues.
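
For completeness, a hedged example of what this looks like in practice (filenames are illustrative; the explicit values shown are, to my knowledge, the documented defaults):

```
# run with defaults (recommended for most data)
mrdegibbs dwi_den.mif dwi_den_unr.mif
# equivalent call with the parameters spelled out explicitly
mrdegibbs dwi_den.mif dwi_den_unr.mif -nshifts 20 -minW 1 -maxW 3
```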

We do not have quality control metrics for these steps (other than a simple SNR calculation), but you certainly seem to be doing a thorough job inspecting your data :slight_smile:
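
If you do want a rough number, something along these lines gives a crude SNR map (a sketch only; it assumes you kept the -noise output of dwidenoise, and all filenames are illustrative):

```
# mean b=0 signal divided by the estimated noise level
dwiextract dwi.mif - -bzero | mrmath - mean b0_mean.mif -axis 3
mrcalc b0_mean.mif noise.mif -divide snr.mif
```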

Cheers
Daan

Thanks for the quick reply! I will try that option! :slight_smile:

So, I have already updated to the dev version.

Just to clarify a few questions:

  1. I understand that this new version sets the patch size automatically based on the number of volumes, right? Is there any log file where that information is stored, so I can check it?

  2. I also understood that, in my case, a patch size of 5 would be the most appropriate. However, what I am observing is that when I run dwidenoise without setting the patch size, it applies a patch size of 7.

I ran a test where I explicitly set the extent (instead of using the automatic option):

- RMS of residuals with extent 5: (image)

- RMS of residuals with extent 7: (image)

By comparing the result of extent 7 with the result from the automatic patch-size selection, I observed no differences between them. However, I was expecting the automatic setting to choose 5x5x5, as you said. Am I doing something wrong?
Also, is it problematic that these structures (highlighted in yellow) are identifiable in the residuals?

  3. I forgot to ask before, but how should I deal with the PA b=0 volumes (3 volumes)? Should I concatenate them with the 132 volumes?

Note: the discontinuities in the noise level in the 5x5x5 case were resolved by the dev version, as you mentioned.

Thanks again for your time!

That is correct. If you run dwidenoise with -info it will print the selected patch size. This information is not logged in the image header, but you can easily write it to a log file using shell syntax along the lines of dwidenoise ... -info 2>> log.txt (MRtrix3 prints these messages to stderr, hence the 2>>).

Very good point, I messed up in my previous answer. As the command docs state, the default is to select “the smallest isotropic patch size that exceeds the number of DW images in the input data”, which would be 7x7x7 in your case.
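
To make that rule concrete, here is a small illustration in plain shell (my own sketch of the stated rule, not the actual implementation; the exact handling of the boundary case where N^3 equals the number of volumes may differ):

```
# smallest odd isotropic patch size N with N^3 reaching M (number of DW volumes)
M=132
N=1
while [ $((N*N*N)) -lt $M ]; do N=$((N+2)); done
echo "default patch size: ${N}x${N}x${N}"   # -> 7x7x7 for M=132
```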

Either way, it is reassuring to see that with the dev version there is hardly any difference in the residuals with different patch sizes. The structure in the residuals is mainly due to CSF, which is expected due to Rician bias.

I would not concatenate them for denoising because of the geometric displacement between the two phase-encoding directions. Instead, I would extract the same number of AP b=0 images from the data before denoising, and concatenate these into an RPE pair that you can pass to topup, or to dwifslpreproc if that's what you're using.
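
As a rough sketch of what I mean (assuming your PA b=0 volumes are in b0_PA.mif and you are using dwifslpreproc; all filenames are illustrative, and adjust -pe_dir to your actual phase-encoding axis):

```
# take the first 3 AP b=0 volumes from the raw series, to match the 3 PA b=0 volumes
dwiextract dwi.mif - -bzero | mrconvert - -coord 3 0:2 b0_AP.mif
# stack them into a reversed phase-encoding pair
mrcat b0_AP.mif b0_PA.mif b0_pair.mif -axis 3
# pass the pair to dwifslpreproc (which runs topup/eddy under the hood)
dwifslpreproc dwi_den_unr.mif dwi_preproc.mif -rpe_pair -se_epi b0_pair.mif -pe_dir AP
```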

Got it! I was looking for some flag called “verbose”. :woman_facepalming:
Thank you! Everything seems to be working perfectly!

Hi Daan!

I just have an additional question that I am wondering about:

  1. What would be the penalty (besides the computational cost) of applying a patch size of 7x7x7, for example, to a dataset of 70 volumes? Wouldn't the noise estimation be more accurate, since there would be more components to discretise the histogram from which the noise is estimated (more precision)? On the other hand, how problematic would it be to apply a patch size of 5x5x5 to the dataset of 132 volumes, given that, according to my estimations, the SNR is higher than with the 7x7x7 patch? (I calculated these estimates based on the recommendation I found here.)

It is not clear to me how this relationship between the patch size and the number of volumes is established in order to achieve maximal SNR. Could you give me a hint on that?

Thank you again for your time!

Hi – there are a few factors to consider. You are right that having more components can improve the precision of the noise level estimation. However, the maximum number of components is always the matrix rank R = min(M, N), where M is the number of DW images and N is the number of elements in the patch. So, in your first example, R = min(70, 7x7x7) = min(70, 343) = 70. In your second example, R = min(132, 5x5x5) = min(132, 125) = 125, or R = min(132, 7x7x7) = min(132, 343) = 132. Therefore, for a given dataset, the number of discrete components is maximised as soon as the patch size exceeds the number of volumes.
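
To make the arithmetic explicit, here is a tiny shell illustration using your 132-volume dataset (the numbers, not the code, are the point):

```
# rank bound R = min(M, N^3) for an NxNxN patch and M volumes
M=132
for N in 5 7; do
  P=$((N*N*N))
  R=$(( M < P ? M : P ))
  echo "N=${N}: patch elements=${P}, rank bound R=${R}"
done
# prints: N=5 -> 125 components; N=7 -> 132 components
```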

Further increasing the patch size indeed has a computational penalty. In addition, the larger patch sizes are also more sensitive to spatial variation in the noise level. More rigorous experiments in Lucilio Cordero-Grande’s paper have also shown close-to-square matrices to be optimal.

Finally, I don’t think that maximum SNR is a good criterion for determining the optimal patch size to use, since it would essentially minimise the estimated noise level. So, it would select a conservative choice, but not necessarily the one closest to the actual noise level.

I hope this helps. Happy Easter!


Hi!

Thanks a lot for your explanation! :slightly_smiling_face: What you said makes sense and was very clear! I am going to take a look at the paper as well.

Stay safe!

Hi all!

As a follow-up to this analysis, I recently acquired a dataset that showed holes in the top slices after denoising:

Do you have any idea why some of these voxels are being excluded by the algorithm? Is it due to their lower intensity compared to neighbouring voxels?

Thanks a lot!

Hi - I’m not sure what I’m looking at here, but are you sure this is not simply a masking issue? dwidenoise never “excludes” voxels if they are within the mask.

Hi Daan!

I guess you are right. I was inspecting the mask generated after the topup correction, and I think dwi2mask failed to include some of those voxels. I decided to follow FSL's recommendation and generate the mask with bet instead. It now seems to include all the voxels.
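
For reference, the two approaches look roughly like this (a sketch only; the filenames and the bet -f value are illustrative):

```
# MRtrix3 mask from the preprocessed series
dwi2mask dwi_preproc.mif mask_dwi2mask.mif
# FSL bet on the mean b=0 image; -m writes the binary mask (b0_brain_mask)
dwiextract dwi_preproc.mif - -bzero | mrmath - mean b0_mean.nii.gz -axis 3
bet b0_mean.nii.gz b0_brain -m -f 0.3
```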

Thank you!