Denoising approach

Hi. I am a PhD student working on HCP data, comparing dMRI at 3T and at 7T. I am especially focusing on denoising approaches, to inspect how much differences in diffusion metrics depend on noise modelling. I would like to denoise the data according to the standard Rician model (the one implemented in the MRtrix3 software, for example) and using autodmri, an algorithm able to estimate the noise without assuming an a priori model. If I wanted to denoise data according to these 2 different models, at which step of the preprocessing pipeline should I denoise? In other words, which data should I start from?
Moreover, different noise estimation models and corresponding denoising approaches exist (MP-PCA as used by MRtrix3, PIESNO, LANE …). I would like to know which noise model eddy uses, and, if I need to start from unprocessed data, whether I could denoise them with different methods and then apply the rest of the HCP preprocessing pipeline?

thank you,

Rosella

Hi Rosella,

Just to clarify: MP-PCA in itself assumes a Gaussian noise distribution. Rician bias correction can be done in a two-pass approach based on the method of moments (see discussion in Veraart et al., NeuroImage 2016), but this is not (yet) implemented in MRtrix3. I had a quick glance at the autodmri preprint but I don’t feel in a position to comment at this point. I suggest you speak to @samuelstjean if you haven’t already.
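To make the method-of-moments idea concrete: for Rician-distributed magnitude data with underlying signal A and noise level σ, the second moment satisfies E[M²] = A² + 2σ², so given a σ estimate one can approximate A ≈ sqrt(max(M² − 2σ², 0)). A minimal numpy sketch of this idea (an illustration, not the MRtrix3 implementation, which as noted does not exist yet):

```python
import numpy as np

def rician_bias_correct(magnitude, sigma):
    """Method-of-moments Rician bias correction (illustrative sketch).

    For Rician magnitude data, E[M^2] = A^2 + 2*sigma^2, so the
    underlying signal can be approximated as
    A ~= sqrt(max(M^2 - 2*sigma^2, 0)).
    """
    m2 = np.asarray(magnitude, dtype=float) ** 2
    return np.sqrt(np.clip(m2 - 2.0 * sigma ** 2, 0.0, None))
```

In a two-pass approach the σ fed into this step would come from a prior noise-estimation pass (e.g. MP-PCA on the unprocessed data).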

We always recommend denoising as the first step in the processing pipeline, i.e., immediately after MR reconstruction (see the docs).
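For reference, the MP-PCA principle (Veraart et al., NeuroImage 2016) applied to a single patch can be sketched as follows: cast the patch as a voxels × volumes matrix, take its eigenspectrum, and discard the components whose eigenvalues are consistent with a Marchenko-Pastur noise bulk. This is a simplified stand-alone illustration, not the dwidenoise implementation; the noise-classification criterion here is a loose variant of the published one.

```python
import numpy as np

def mp_denoise_patch(X):
    """Simplified MP-PCA denoising of one patch (sketch only).

    X : (n_voxels, n_volumes) data matrix, i.i.d. Gaussian noise assumed.
    Returns (denoised patch, estimated noise sigma).
    """
    M, N = X.shape
    mean = X.mean(axis=0, keepdims=True)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    R = len(s)
    lam_asc = (s[::-1] ** 2) / M          # eigenvalues, smallest first
    cutoff, sigma2, csum = 0, 0.0, 0.0
    for p in range(1, R + 1):
        csum += lam_asc[p - 1]
        mean_lam = csum / p               # running mean of the p smallest
        # For a pure Marchenko-Pastur bulk, the eigenvalue spread is about
        # 4*sqrt(p/M)*sigma^2 while the mean is about sigma^2:
        spread = (lam_asc[p - 1] - lam_asc[0]) / (4.0 * np.sqrt(p / M))
        if spread < mean_lam:             # bottom p eigenvalues look like noise
            cutoff, sigma2 = p, mean_lam
    keep = R - cutoff                     # signal components to retain
    s_d = s.copy()
    s_d[keep:] = 0.0
    return (U * s_d) @ Vt + mean, float(np.sqrt(sigma2))
```

Note that this only makes sense on data whose noise is still spatially independent and identically distributed, which is precisely why it has to run before any interpolation step.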

As far as I know, eddy uses a Gaussian noise model with uniform standard deviation, which is estimated using cross-validation during its “hyperparameter optimisation” at the start. There is some debate about how prior denoising may affect this hyperparameter optimisation, but like I said above, we nevertheless recommend to denoise first (at least for MP-PCA).

Hi.
Thank you for your response. So do you think that applying denoising to the HCP processed data (which underwent all the preprocessing steps, including eddy's outlier replacement) would be a wrong approach?
Thanks
Rosella

Looks like I replied to a no-reply email address thinking it would show up here, so, ahem, here is what I actually wrote on the subject so it is not lost forever.


Well, I can try to pitch in, but it may seem disjointed, as the other half of the discussion is so far in private emails between 3 people and covers about 75% of the questions here.

Anyway, as Daan also suggested, it seems his recommendation mostly follows what we talked about yesterday. In addition, if you wish, you could apply the bias correction independently to your eddy-, MP-PCA-, or otherwise-processed data (it's an option in my tool to apply just the bias correction by itself) and combine it with your denoising of choice.

I would also argue for applying this step last (well, after denoising with MP-PCA or eddy at least), considering they do their own internal noise estimation, and it would likely interfere with that (severely, at least it did with eddy on the FSL mailing list), as we talked about previously.

As for autodmri, it actually does assume a noise model (just a very generic central chi one, valid for any magnitude data so far) and finds voxels fitting that model up to a given probability. Think of it as a generalization of PIESNO, but that's getting a bit out of scope now; that's also why I recommended estimating the noise distribution from the unprocessed datasets, but then reusing that information on the processed data afterwards.
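To illustrate the general idea behind this family of estimators (identify voxels consistent with a pure-noise distribution, then estimate σ robustly from them) with the simplest special case: in signal-free background voxels of single-coil magnitude data the Rician distribution reduces to a Rayleigh distribution, whose median is σ·sqrt(2·ln 2). This is only a toy illustration of the principle, not the actual PIESNO or autodmri algorithm, and the function name is mine:

```python
import numpy as np

def sigma_from_background(background_voxels):
    """Estimate sigma from voxels assumed to contain only noise.

    In signal-free magnitude voxels (single coil), Rician noise reduces
    to a Rayleigh distribution with median sigma*sqrt(2*ln 2), so the
    sample median gives a robust sigma estimate. (Toy illustration of
    the idea behind PIESNO/autodmri, not their actual algorithms.)
    """
    med = np.median(np.asarray(background_voxels, dtype=float))
    return med / np.sqrt(2.0 * np.log(2.0))
```

The real methods additionally solve the harder part: deciding, up to a given probability, which voxels actually belong to the noise-only distribution in the first place.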

I think that should cover most questions, good luck.

Samuel

Ok, thank you. My problem arises from the fact that I am more or less forced to work with the already-processed HCP data, since the raw unprocessed ones would require information about the gradients, which is private. So I was wondering whether applying any kind of denoising to these processed data could cause problems.
best,
Rosella

I would certainly advise against it. The interpolation during motion correction, and certainly the outlier replacement, will distort the noise distribution, invalidating the MP-PCA noise level estimator.
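The effect is easy to demonstrate: linearly interpolating i.i.d. noise both shrinks its apparent variance and introduces correlation between neighbouring voxels, which is exactly what breaks the i.i.d.-Gaussian assumption behind the MP-PCA noise-level estimate. A minimal 1-D numpy illustration (synthetic noise, not HCP data):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)   # i.i.d. unit-variance noise

# Linear interpolation at a half-voxel shift (what motion/distortion
# correction does, only in 3-D and with fancier kernels):
y = 0.5 * (x[:-1] + x[1:])

def lag1_corr(a):
    """Correlation between neighbouring samples."""
    return np.corrcoef(a[:-1], a[1:])[0, 1]

print(x.var(), y.var())            # variance drops from ~1 to ~0.5
print(lag1_corr(x), lag1_corr(y))  # correlation rises from ~0 to ~0.5
```

Outlier replacement is worse still, since replaced slices no longer contain independent noise at all.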

I appreciate the additional challenges, although if any information about the input data were missing this would be something to flag with the HCP consortium.

I thought of denoising the processed data using the noise estimate derived from the unprocessed data, in order to use the original noise profile.
Best,
Rosella

My problem arises from the fact that I am more or less forced to work with the already-processed HCP data, since the raw unprocessed ones would require information about the gradients, which is private.

The gradient non-linearity information for the original Connectome Skyra scanner can be obtained from Siemens on request. We ourselves committed resources to re-processing the HCP DWI data to incorporate the latest denoising capability, but had substantial difficulty replicating the pipeline with adequate precision. I would advise against going down that road unless you are very determined.

Apart from the HCP data specifically, it'd be useful to put forward some kind of consensus at some point w.r.t. denoising (i.e. dwidenoise specifically) being compatible with eddy's expectations. I.e. this one:

(I mean, separate from the fact that MP-PCA denoising can’t (shouldn’t) come after motion/distortion correction of course; I think everyone’s on the same page about that for sure.)

The current advice from the different relevant parties is more or less the opposite on this matter.

Don’t get me wrong, I’m not voicing either argument in this debate myself. It’s just that users who are careful about this in general are at times alarmed by the EDDY QC tools’ output, which appears to trace back mostly to whether dwidenoise was applied (or not). Those looking for advice currently get the 2 opposite statements. :man_shrugging:

Hi. Thank you for your response. May I ask where, once you obtained the gradient non-linearity information, you ran into issues replicating the HCP pipeline?
thanks,
Rosella

I think I recall it being hard to get the exact same version or outcome of the topup and/or eddy tools. If I remember well, the eddy outcome wasn’t deterministic (i.e. running it multiple times didn’t produce the exact same output). But it was hard to figure out exactly which step made the end result not reproduce the available preprocessed data: there are multiple steps, and only the final output to “validate” against… so it was not very specific as to what might’ve caused the inability to reproduce exactly.

Back then I was initially hopeful this would be feasible and useful (to try and replicate, and then add steps e.g. denoising), but since then I’ve kind of come to doubt whether it’s worth it. Unless it’s for a very specific purpose, I personally wouldn’t bother and go with the minimally preprocessed data that’s available.

So do you think applying denoising to the minimally preprocessed data would be ok?
thanks,
Rosella

No, sorry, I hope you didn’t get that impression. What I mean is to use the preprocessed data without any additional preprocessing. The reasoning being: to insert the denoising at a reasonable place, it has to be right at the start of the pipeline. So that strictly implies you need to dig into that and start from the raw data (and reproduce the pipeline, with all the aforementioned problems, and our own failure to do that in practice). Putting all of that together and making up the balance, I’m personally inclined to say it’s not worth it for those HCP data specifically. I hope that makes sense…? This is just a personal opinion though, i.e., trying to weigh the effort (plus potential failure, or maybe introducing other issues) against the possible gains.

I guess, for you, it also depends a lot on how much time (and courage) you have to try and pull this off. So it certainly involves a few “personal weights” in the equation as to whether to try it or not.

Ok I will take time to decide. Thank you very much!
Rosella

Anyway, if I decided to start from the unprocessed data, could I apply denoising at least after distortion correction, given that HCP provides LR/RL phase encoding?
Thanks,
Rosella

No, I don’t think it’d be useful to apply denoising after any step that performs spatial interpolation (such as distortion correction). If it is to have any use, it has to come before that.

So I should apply denoising to both of the inverse phase encoding images (LR and RL)?
thanks,

Rosella

Yep, correct; that’s an option.
