I have seen multiple times on the FSL forum that folks recommend not applying any denoising before eddy, as this interferes with how eddy works. I wanted to know: how important are denoising and unringing in fixel-based analysis?
I haven't heard about this… Can you post a link to such a recommendation? I'd like to see exactly what the issues are here. I expect this recommendation applies to specific types of denoising that may introduce various biases. As far as I can tell, the MP-PCA method included in dwidenoise does not introduce bias, so I don't think there's much scope for it to somehow interfere with eddy; there's certainly not been any reports of this here to date…
I am copying part of a post by Jesper Andersson on the FSL forum:
You should not denoise data before running eddy. The artificial nature of denoised data means that the Gaussian process that is at the heart of eddy sometimes/often fails to find reasonable hyperparameters for data that has been denoised. Sometimes this results in suboptimal parameters, which means that the motion/distortion correction is suboptimal. Sometimes it results in an infinite loop in the search for those hyperparameters, which I would guess is what has happened here.
I don't know the details about the denoising you use, but I would also not be surprised if the denoising works suboptimally when there are motion and dropouts in your data. Which would be another reason to do it after eddy, if at all.
Yes, that sounds reasonable if the denoising technique used introduces artificial structure that wasn't there before. This more or less matches what I suspected above. There are plenty of denoising methods out there that produce clean-looking DW images, but destroy or introduce unexpected angular structure. I've not tested any of them personally, but I'd expect this would apply to techniques like non-local means or total variation (which I think Jelle Veraart showed in his MP-PCA paper).
Given what Jesper says here, it sounds like when it's a problem, it's an outright failure. So I'd recommend keeping an eye on it if you're worried, but given that we've never had a report of a problem with our denoising so far, I don't expect it's a problem…
Just adding to what Donald said, I want to point out that the MP-PCA denoising paper specifically recommends running denoising as the first step of the pipeline (link). A good explanation is given in this paper from the same group:
It is important that the denoising step is the first stage of the pipeline as it relies on noise being uncorrelated both spatially and among successive acquisitions (in the dMRI case, in q-space). Performing this step after processing steps that use interpolation to reconstruct images would result in correlated noise and failure of the basic assumptions underlying the random matrix theory-based approach to PCA denoising.
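The random-matrix idea in that quote can be sketched in a few lines of numpy: simulate a low-rank "signal" matrix plus white noise, and discard every component whose covariance eigenvalue falls below the Marchenko-Pastur bulk edge. This is only a toy illustration of the principle, not the actual dwidenoise implementation (which estimates the noise level properly and operates on local patches):

```python
import numpy as np

rng = np.random.default_rng(0)

def mp_denoise(X):
    """Toy MP-PCA: discard components below the Marchenko-Pastur edge.

    Assumes X is (voxels x volumes) with i.i.d. Gaussian noise. The
    noise-variance estimate here is deliberately crude; dwidenoise
    does this far more carefully."""
    m, n = X.shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    lam = s**2 / m                            # covariance eigenvalues
    sigma2 = np.median(lam)                   # rough noise variance
    edge = sigma2 * (1 + np.sqrt(n / m))**2   # MP upper bulk edge
    s = np.where(lam > edge, s, 0.0)          # zero out noise-only components
    return (U * s) @ Vt

# rank-3 "signal" across 60 volumes, plus white noise
m, n = 500, 60
signal = rng.standard_normal((m, 3)) @ rng.standard_normal((3, n))
noisy = signal + 0.5 * rng.standard_normal((m, n))
denoised = mp_denoise(noisy)
```

With the three signal components sitting far above the noise bulk, only noise-only components are removed, and the residual `noisy - denoised` should look like pure noise; checking that residual is exactly the sanity check recommended for dwidenoise output. If the noise were correlated (e.g. after interpolation), its eigenvalues would no longer follow the MP distribution and the threshold would be wrong.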
Unless you are encountering unexpected issues, I would thus recommend sticking to the recommendation of the paper.
W.r.t. specific points in Jesper's comment:
As MP-PCA will only suppress white, uncorrelated noise, the output should never be of "artificial nature" (you can check this in the denoising residuals). I see no theoretical reason why this should be incompatible with the Gaussian process regression in eddy, but it is of course possible that eddy makes certain default assumptions about the expected SNR.
The point about motion effects on denoising is true: any loss of redundancy in the input data due to motion and slice dropouts will result in less effective denoising. However, correcting motion first introduces noise correlations that will also reduce the capacity to denoise, as explained in the quote.
Thanks for your response, Daan.
In our dataset, each subject has dMRI data comprising two shells acquired with opposite PE directions in the same session.
Should I do denoising and Gibbs ringing correction for each PE separately, or can I concatenate them and then do denoising and Gibbs ringing correction?
This is a tricky issue, and discussed in this thread; personally I'd concatenate and see what happens, but inspect the results very closely, particularly the residuals of dwidenoise.
I have run denoising and Gibbs ringing removal before eddy, but I think this has caused problems for some of my subjects.
Eddy completed with no errors, and produced output that looked OK to me on first visual inspection. I noticed problems when I used the eddyqc tools. I ran eddy with the options --cnr_maps and --residuals to generate extra QC metrics. For some subjects, the CNR values were NaN or extremely high, and the residuals for the diffusion-weighted volumes were 0. This is not the case when I run eddy on the raw DWIs. I posted to the FSL mailing list about this and they thought denoising was causing eddy to fail for these subjects: basically, that denoising made it as if there was no uncertainty in the data and no need for correction.
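For anyone wanting to screen a whole cohort for this failure mode: eddy_quad writes a qc.json per subject, and the broken subjects show up as NaN or absurdly large per-shell CNR averages. A small sketch; the `qc_cnr_avg` field name is an assumption based on recent eddy_quad output, so check it against your own files:

```python
import json
import math
from pathlib import Path

def suspicious_cnr(qc_json, max_cnr=100.0):
    """Return per-shell average CNR values that are NaN or implausibly
    large, from one subject's eddy_quad qc.json.

    Assumes the per-shell averages live under 'qc_cnr_avg'; verify
    this key against your eddy_quad version."""
    qc = json.loads(Path(qc_json).read_text())
    return [v for v in qc.get("qc_cnr_avg", [])
            if math.isnan(v) or abs(v) > max_cnr]

# e.g. flag subjects whose preprocessing needs a second look:
# bad = {s: suspicious_cnr(f"{s}.qc/qc.json") for s in subjects}
```

An empty list means that subject's CNR averages all look plausible; anything returned warrants opening the QC report for that subject.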
It would be great to get people's thoughts. It seems like I'll have to run eddy on my raw data. Should I run denoising after eddy, or not at all? What effect might skipping denoising have on a fixel-based analysis? I'm also not sure if I should run Gibbs ringing removal on my data. It might help if someone looks at the data for one of my problematic subjects; is that possible?
Just to clarify, since you mention both denoising as well as Gibbs ringing removal: what happens if you don't run either the denoising, or the Gibbs ringing removal? I.e., can you easily figure out which one of these steps causes the subsequent problems you're describing? I'm asking because…
…this sounds to me as if you experience the issues when both denoising and Gibbs ringing removal have been run, but not if both have been skipped. So it might be useful to know what happens in the scenario where just one of the two (either denoising, or Gibbs ringing removal) is run.
Apart from the issues themselves, which would preferably be figured out and solved properly of course, a note on this:
In my personal experience, denoising has never been crucial to the result of fixel-based analyses that I have run or observed. There might be very particular scenarios where it has a more substantial effect on the outcome of the FBA as a whole, but that would then probably be a scenario where you're very close to e.g. your p-value threshold or similar…
Personally, I do value Gibbs ringing removal a bit more ("relatively speaking"), since there are scenarios where it can cause outright biases, even false positives, or other issues with interpretation.
All that said, I should again emphasise that it'd of course be better to figure out the true cause of the issue, so you can simply run both denoising and Gibbs ringing removal without worries.
Hi there!
Not sure whether this case was closed or not. I ran into the same problem, so I'm going to write it down here in case somebody needs it.
I preprocessed 140 subjects; 130 went OK, 6 of them failed during preproc, and 4 had really weird results (a.k.a. NaNs or something like 23392738.00).
I re-ran the same script:
(1) without denoise & with gibbs unringing
(2) with denoise & without gibbs unringing
Results for one random subject: denoise & unringing
Average SNR (b=0 s/mm2) 50.11
Average CNR (b=700 s/mm2) 3.69
Average CNR (b=1000 s/mm2) 4.04
Average CNR (b=2300 s/mm2) 23392738.00
only denoise
Average SNR (b=0 s/mm2) 47.42
Average CNR (b=700 s/mm2) 3.36
Average CNR (b=1000 s/mm2) 4.20
Average CNR (b=2300 s/mm2) 20990916.00
only degibbs
Average SNR (b=0 s/mm2) 35.58
Average CNR (b=700 s/mm2) 1.59
Average CNR (b=1000 s/mm2) 1.60
Average CNR (b=2300 s/mm2) 1.64
Imho, it is the denoising step that is causing problems with my data. Also, the SNR looks better. I am looking forward to confirmation if anybody has faced the same problem.
I should probably add to the above exchange that I currently work with Claire. We have investigated this issue more closely, and realised denoising is causing relevant problems with motion & distortion correction. Since we are working with kids' data, where motion is a critical factor we have to correct for as well as possible, we are currently not including denoising for some of our cohorts' processing. The impact of noise (and thus potentially denoising) on FOD estimation and our relevant applications is extremely minor, so when weighing things up, the motion correction wins out for us in terms of critical value. We didn't run into the dwifslpreproc error you report though, so that might indicate another issue.
Hi Thijs,
thanks for the update. I was playing a little bit with dwifslpreproc settings, but nothing has helped so far, so I am applying a similar approach for deciding between denoising and eddy. Please let me know if you crack this nut.
A little correction to the previous post: of course, the SNR looks worse, not better. Misclick, whoops.
Have you come up with something new @Thijs? I've tried upgrading to the new FSL version (6.0.5), just in case… but no progress. I need to apply eddy because of motion artefacts, and I really don't wanna skip denoising for my FBA analysis. 130 of 140 subjects are OK, but I obviously don't wanna exclude these 10 subjects, because some of them are patients.
I ran into 2 issues (either the calculation crashed, or it completed but the CNR values are totally strange).
I am posting the tmp folder for one subject for each issue on Google Drive:
*Q7 - calculation interrupted
*O8 - CNR issues
I've just had a look through your tmp folders to see what I could find.
For the CNR issue, I'm not sure what the problem is: the CNR values are obviously nonsense for the two higher shells, but the results look perfectly sensible. The CNR maps shown in the QC report also look completely clean, and if anything close to zero (this may be what's causing the nonsense values?), not the very large values shown in the table at the head of the report. I confess I'm not sure what the relationship is between the CNR values and the CNR maps, so maybe I'm off the mark on that one… The outlier report shows not a single slice was detected as an outlier, and the resulting DW images indeed look pretty clean to me. All in all, I don't see a problem with these data other than the strange CNR values reported by the eddy QC…
For the other dataset, it looks like the problem is that the eddy_indices.txt file is empty, when it should contain a list of 100 ones. Can you try the following:
Yep, the CNR totally does not make sense; the images look OK.
About the eddy_indices.txt: I checked it and it is not empty in either of the posted datasets. In both cases it contains 100 ones. I tried downloading the posted datasets again too.
Ok, not sure why that file was empty when I downloaded it on my system…
So thatās not the issue. What happened when you tried to run eddy directly, then? Hopefully there was some error message to give us a hint as to the failure?
Ok,
I ran it directly and I got the following error:
EDDY::: DoVolumeToVolumeRegistration: Unable to find volume with no outliers in shell 1 with b-value=1000 EDDY::: Eddy failed with message EDDY::: eddy.cpp::: EDDY::ReplacementManager* EDDY::DoVolumeToVolumeRegistration(const EDDY::EddyCommandLineOptions&, EDDY::ECScanManager&):
I found a similar error on JISCMAIL, where a "hidden" option --ol_ec=2 was suggested by Jesper Andersson for slice-to-volume correction. After googling, I found that it helped in a few cases with NaN and extreme values of CNR too (same origin of the problems), but not in my case.
In an email from 21 Jun 2021, Jesper Andersson mentioned:
this is a known problem. It occurs when eddy tries to figure out if there are any volumes truly unaffected by intra-volume movement (to use as a shape reference). One of the criteria for this is that there mustn't be any signal dropout that can be attributed to subject movement. This sometimes becomes a problem, especially for shells with low b-values. The low b-value shell has an inherently higher sensitivity for detecting even very small signal dropouts. In a future release there will be an option to manually specify one such volume for each shell, which will bypass the automatic tests inside eddy. But unfortunately I didn't manage to include that in the upcoming release. But hopefully there will be more frequent eddy releases going forward, so that it will happen in a not too far away future.
So maybe FSL 6.0.6? If I run eddy without the --mporder option (slice-to-volume correction), it is OK.
OK, so that's what others have observed too. Can I ask you to post the full output of eddy_cuda? I'd like to see at exactly what stage of execution the issue happens.
Another thing to try might be to increase the tolerance for outliers using the --ol_nstd option, e.g. --ol_nstd 5 or even higher (the default is 4). From the FSL docs for that option (emphasis mine):
This parameter determines how many standard deviations away a slice need to be in order to be considered an outlier. The default value of 4 is a good compromise between type 1 and 2 errors for a "standard" data set of 50-100 directions. Our tests also indicate that the parameter is not terribly critical and that any value between 3 and 5 is good for such data. For data of very high quality, such as for example HCP data with 576 dwi volumes, one can use a higher value (for example 5). Conversely, for data with few directions a lower threshold can be used.
It's possible that denoised data qualify as "very high quality"… But more to the point: if this reduces the sensitivity of the outlier detection back down to a reasonable level (and I am assuming it is being overly sensitive in these cases; I may be wrong), this may allow at least one volume to be identified as having no outlier slices, which could then be used as the reference and get over this error.
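The mechanism can be sketched with made-up numbers: eddy needs at least one volume per shell whose slices all sit within the outlier threshold to serve as a shape reference, so raising --ol_nstd can be the difference between finding one and hitting the error above. The z-scores below are invented for illustration and this is a toy stand-in, not eddy's actual outlier statistic:

```python
import numpy as np

def reference_volume(max_abs_z, nstd):
    """Index of the first volume whose worst slice z-score is within
    the threshold, or None if every volume contains an outlier slice
    (a toy stand-in for eddy's shape-reference search)."""
    ok = np.flatnonzero(max_abs_z <= nstd)
    return int(ok[0]) if ok.size else None

# worst slice z-score per volume in one shell (made-up values)
z = np.array([4.6, 4.2, 5.3, 4.1])

print(reference_volume(z, nstd=4))  # None: no outlier-free volume, eddy would fail
print(reference_volume(z, nstd=5))  # 0: volume 0 now qualifies as the reference
```

If denoising compresses the residual noise so much that ordinary slices sit several "standard deviations" from the prediction, every volume can end up with at least one flagged slice at the default threshold, which would match the failure seen here.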
OK, I am posting the output of the eddy_cuda command when I ran it directly in the tmp folder.
eddy_cuda --imain=eddy_in.nii --mask=eddy_mask.nii --acqp=eddy_config.txt --index=eddy_indices.txt --bvecs=bvecs --bvals=bvals --topup=field --slm=linear --repol --mporder=16 --cnr_maps --verbose --slspec=slspec.txt --out=dwi_post_eddy --verbose
Reading images
Performing volume-to-volume registration
Running Register
...................Allocated GPU # 0...................
Loading prediction maker
Evaluating prediction maker model
Calculating parameter updates
Iter: 0, Total mss = 25.4058
Loading prediction maker
Evaluating prediction maker model
Calculating parameter updates
Iter: 1, Total mss = 8.99788
Loading prediction maker
Evaluating prediction maker model
Calculating parameter updates
Iter: 2, Total mss = 8.39175
Loading prediction maker
Evaluating prediction maker model
Calculating parameter updates
Iter: 3, Total mss = 8.36004
Loading prediction maker
Evaluating prediction maker model
Calculating parameter updates
Iter: 4, Total mss = 8.34272
Setting scan 94 as b0 shape-reference.
Running sm.ApplyB0LocationReference
Running sm.PolateB0MovPar
Running Register
Loading prediction maker
Evaluating prediction maker model
Checking for outliers
Calculating parameter updates
Iter: 0, Total mss = 1.60248
Loading prediction maker
Evaluating prediction maker model
Checking for outliers
Loading prediction maker
Evaluating prediction maker model
Calculating parameter updates
Iter: 1, Total mss = 1.64412
Loading prediction maker
Evaluating prediction maker model
Checking for outliers
Loading prediction maker
Evaluating prediction maker model
Calculating parameter updates
Iter: 2, Total mss = 1.7028
Loading prediction maker
Evaluating prediction maker model
Checking for outliers
Loading prediction maker
Evaluating prediction maker model
Calculating parameter updates
Iter: 3, Total mss = 1.60064
Loading prediction maker
Evaluating prediction maker model
Checking for outliers
Loading prediction maker
Evaluating prediction maker model
Calculating parameter updates
Iter: 4, Total mss = 1.5794
Setting scan 10 as shell shape-reference for shell 0 with b-value= 700
EDDY::: DoVolumeToVolumeRegistration: Unable to find volume with no outliers in shell 1 with b-value=1000
EDDY::: Eddy failed with message EDDY::: eddy.cpp::: EDDY::ReplacementManager* EDDY::DoVolumeToVolumeRegistration(const EDDY::EddyCommandLineOptions&, EDDY::ECScanManager&): Exception thrown
I have exactly 100 directions, so I used --ol_nstd=5 and it solved the issue: the calculation finished successfully with meaningful CNR, SNR, % of outliers etc.
Thanks a lot!! I never thought "very high quality" data could cause an error in the calculations…