Denoising and unringing before eddy


I have seen multiple times on the FSL forum that people recommend not using any denoising before running eddy, as this interferes with how eddy works. I wanted to ask: how important are denoising and unringing in fixel-based analysis?


I haven’t heard about this… Can you post a link to such a recommendation? I’d like to see exactly what the issues are here. I expect this recommendation applies to specific types of denoising that may introduce various biases. As far as I can tell, the MP-PCA method included in dwidenoise does not introduce bias, so I don’t think there’s much scope for it to somehow interfere with eddy – there have certainly been no reports of this here to date…

I am copying part of a post from Jesper Andersson on FSL forum:

  1. You should not denoise data before running eddy. The artificial nature of denoised data means that the gaussian process that is at the heart of eddy sometimes/often fail to find reasonable hyperparameters for data that has been denoised. Sometimes this results in suboptimal parameters that means that the motion/distortion correction is suboptimal. Sometimes it results in an infinite loop in the search for that hyperparameters, which I would guess is what has happened here.
    I don’t know the details about the denoising you use, but I would also not be surprised if the denoising works suboptimally when there are motion and dropouts in your data. Which would be another reason to do it after eddy, if at all.


Yes, that sounds reasonable if the denoising technique used introduces artificial structure that wasn’t there before. This more or less matches what I suspected above. There are plenty of denoising methods out there that produce clean-looking DW images, but destroy or introduce unexpected angular structure. I’ve not tested any of them personally, but I’d expect this would apply to techniques like non-local means or total variation (which I think Jelle Veraart showed in his MP-PCA paper).

Given what Jesper says here, it sounds like when it’s a problem, it’s an outright failure. So I’d recommend keeping an eye on it if you’re worried, but given that we’ve never had a report of a problem with our denoising so far, I don’t expect it’s a problem…


Hi Mahmoud,

Just adding to what Donald said, I want to point out that the MP-PCA denoising paper specifically recommends denoising as the first step of the pipeline (link). A good explanation is given in this paper from the same group:

It is important that the denoising step is the first stage of the pipeline as it relies on noise being uncorrelated both spatially and among successive acquisitions (in the dMRI case, in q-space). Performing this step after processing steps that use interpolation to reconstruct images would result in correlated noise and failure of the basic assumptions underlying the random matrix theory-based approach to PCA denoising.

Unless you are encountering unexpected issues, I would thus recommend sticking to the recommendation of the paper.
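The quote's point about interpolation can be demonstrated numerically: resampling white noise with linear interpolation (as motion/distortion correction does) makes neighbouring samples share noise, which is exactly the correlation that breaks the random matrix theory assumptions. A minimal sketch (pure numpy, not actual eddy output):

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal(10000)  # white noise: samples independent

def lag1_corr(x):
    """Correlation between neighbouring samples; ~0 for white noise."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

raw_corr = lag1_corr(noise)

# Resample at half-sample offsets with linear interpolation,
# a stand-in for the interpolation inside motion correction.
shifted = 0.5 * (noise[:-1] + noise[1:])
interp_corr = lag1_corr(shifted)

print(raw_corr, interp_corr)  # interp_corr is ~0.5: noise is now correlated
```

Each interpolated sample averages two adjacent noise samples, so neighbours share half their variance – the noise is no longer white, and an MP-PCA-style eigenspectrum fit after such a step would be working from violated assumptions.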

W.r.t. specific points in Jesper’s comment:

  • As MP-PCA will only suppress white, uncorrelated noise, the output should never be of an “artificial nature” (you can check this in the denoising residuals). I see no theoretical reason why this should be incompatible with the Gaussian process regression in eddy, but it is of course possible that eddy makes certain default assumptions about the expected SNR.
  • The point about motion effects on denoising is true: any loss of redundancy in the input data due to motion and slice dropouts will result in less effective denoising. However, correcting motion first introduces noise correlations that will also reduce the capacity to denoise, as explained in the quote.
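To make the “only suppresses uncorrelated noise” point concrete, here is a toy sketch of PCA-based denoising on a low-rank signal plus white Gaussian noise. This is a simplified stand-in for the full MP-PCA estimator in dwidenoise: the real method estimates both the noise level and the number of significant components from the Marchenko–Pastur law, whereas here the true rank is simply assumed known.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "Casorati" matrix: M voxels x N volumes, true rank 5
M, N, rank, sigma = 500, 60, 5, 1.0
signal = rng.standard_normal((M, rank)) @ rng.standard_normal((rank, N))
noisy = signal + sigma * rng.standard_normal((M, N))

# Hard-threshold PCA: keep only the top `rank` components.
# (MP-PCA instead infers the cut-off from the eigenspectrum.)
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
s[rank:] = 0.0
denoised = (U * s) @ Vt

err_noisy = np.linalg.norm(noisy - signal)
err_denoised = np.linalg.norm(denoised - signal)
print(err_noisy, err_denoised)  # denoised error is substantially smaller
```

Because the discarded components contain (almost) only noise, the reconstruction moves closer to the true signal rather than inventing structure – which is why the residuals of a well-behaved run should look like pure noise.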



Thanks for your response, Daan.
In our dataset, each subject has dMRI data comprising two shells acquired with opposite phase-encoding (PE) directions in the same session.
Should I do denoising and Gibbs ringing correction for each PE direction separately, or can I concatenate them and then do denoising and Gibbs ringing correction?

This is a tricky issue, and discussed in this thread – personally I’d concatenate and see what happens, but inspect the results very closely, particularly the residuals of dwidenoise.
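Beyond eyeballing the residuals, a quick quantitative sanity check is whether the residual (raw minus denoised) correlates with the underlying signal: a denoiser that removed only noise should show essentially no such correlation. A toy sketch with synthetic data (in practice you would load the exported image series; the names and the over-aggressive “denoiser” here are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for an image series: structure plus white noise
signal = np.sin(np.linspace(0, 40 * np.pi, 5000))   # "anatomy"
raw = signal + 0.3 * rng.standard_normal(5000)

good = signal       # ideal denoiser: removes only the noise
bad = 0.5 * raw     # over-aggressive: shrinks the signal too

def signal_leakage(raw, denoised, reference):
    """Correlation of the residual with the reference signal:
    ~0 if only noise was removed, large if structure leaked out."""
    residual = raw - denoised
    return np.corrcoef(residual, reference)[0, 1]

leak_good = signal_leakage(raw, good, signal)
leak_bad = signal_leakage(raw, bad, signal)
print(leak_good, leak_bad)  # near 0 vs. clearly non-zero
```

On real data the true signal is of course unknown, so the practical equivalent is correlating the residual image against the denoised image (or simply checking that the residual map shows no anatomical contrast).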

Hi all,

I have run denoising and Gibbs ringing removal before eddy, but I think this has caused problems for some of my subjects.

Eddy completed running with no errors, and produced output that looked OK to me on first visual inspection. I noticed problems when I used the eddyqc tools. I ran eddy with the options --cnr_maps and --residuals to generate extra QC metrics. For some subjects, the CNR values were NaN or extremely high, and the residuals for the diffusion-weighted volumes were 0. This is not the case when I run eddy on the raw DWIs. I posted to the FSL mailing list about this, and they thought denoising was causing eddy to fail for these subjects: basically, that denoising made it appear as if there were no uncertainty in the data and no need for correction.

It would be great to get people’s thoughts. It seems like I’ll have to run eddy on my raw data. Should I run denoising after eddy, or not at all? What effect might skipping denoising have on a fixel-based analysis? I’m also not sure whether I should run Gibbs ringing removal on my data. It might help if someone could look at the data for one of my problematic subjects; is that possible?



Hi @Claire,

Just to clarify, since you mention both denoising as well as Gibbs ringing removal: what happens if you don’t run either the denoising or the Gibbs ringing removal? I.e., can you easily figure out which one of these steps causes the subsequent problems you’re describing? I’m asking because…

…this sounds to me as if you experience the issues when both denoising and Gibbs ringing removal have been run, but not when neither has been run. So it would be useful to know what happens when just one of the two (either denoising, or Gibbs ringing removal) is run.

Apart from the issues themselves, which would of course preferably be properly figured out and solved, a note on this though:

In my personal experience, denoising has never been crucial to the result of fixel-based analyses that I have run or observed. There might be very particular scenarios where it has a more substantial effect on the outcome of the FBA as a whole, but that would then probably be a scenario where you’re very close to e.g. your p-value threshold or similar…

Personally, I do value Gibbs ringing removal a bit more (“relatively speaking”), since there are scenarios where ringing can cause outright biases, even false positives, or other issues with interpretation.

All that said, I should again emphasise that it’d of course be better to figure out the true cause of the issue, so you can simply run both denoising and gibbs ringing removal without worries.