Dealing with Motion in DWI data

Dear Mrtrix Community,

I was looking for some feedback on the best way to deal with motion corruption in DWI data. I am collecting multi-shell DWI from healthy children and from children with developmental disorders, so movement can be quite an issue. Currently I am applying topup and eddy to all datasets using the MRtrix3 wrapper script and then inspecting for any obvious signs of motion, but I am not sure this is sufficient.
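
For concreteness, a minimal sketch of the sort of check I mean by "inspecting" is below; it assumes eddy's *.eddy_movement_rms output is available, and the 1 mm threshold is entirely arbitrary:

```python
import numpy as np

# Per-volume movement estimates written by eddy (the file name depends on the
# --out prefix passed to eddy; "dwi_post_eddy" here is just an example).
# Column 0: RMS displacement relative to the first volume.
# Column 1: RMS displacement relative to the previous volume.
rms = np.loadtxt("dwi_post_eddy.eddy_movement_rms")

THRESHOLD_MM = 1.0  # arbitrary flagging threshold, not a validated cut-off

flagged = np.where(rms[:, 1] > THRESHOLD_MM)[0]
print(f"{len(flagged)} of {rms.shape[0]} volumes exceed {THRESHOLD_MM} mm "
      f"volume-to-volume RMS movement: {flagged.tolist()}")
```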

I know some people manually remove the raw diffusion volumes with significant movement before running any preprocessing. Firstly, is this the best approach? Secondly, could removing volumes result in unequal sampling across shells and diffusion directions (and potentially a difference in sampling between the control group, who tend to move less, and the patient group, who move more)? If I am going to remove volumes, the follow-up question is how many it is appropriate to remove before the dataset shouldn't be included at all.

I also know that eddy performs an outlier detection and replacement strategy, but what are people's experiences with it, and is it sufficient on its own?

A follow-on question: a previous post on the forum discussed how robust CSD and dwi2tensor are to artefacts, but how many outliers can a dataset contain before the output of CSD and dwi2tensor should be treated with caution?

It would be great to hear about people's experiences of this with MRtrix3 and what they find to be the best strategy.

Best Wishes,

Alex

Hi Alex

I’m working with neonatal diffusion data, so motion is a problem for me as well.

My approach is to first identify and reject volumes that have significant motion between slices. I have previously used the discontinuity index from this paper (https://www.ncbi.nlm.nih.gov/pubmed/21600994). I find that this works really well when there is motion in only a few volumes, but the thresholding approach fails when many volumes are affected (i.e. the threshold is elevated, and no motion is detected at all).
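
To make the idea concrete, here is a rough sketch of a slice-to-slice discontinuity check. This is a crude stand-in rather than the exact index from the paper, and the 3-standard-deviation threshold is arbitrary:

```python
import numpy as np
import nibabel as nib

# Load the raw (pre-eddy) DWI series; the file name is just an example.
dwi = nib.load("dwi_raw.nii.gz").get_fdata()  # shape: (x, y, slices, volumes)

# Crude discontinuity score per volume: mean absolute intensity difference
# between adjacent slices. Within-volume motion tends to inflate this score.
diff = np.abs(np.diff(dwi, axis=2))
score = diff.reshape(-1, dwi.shape[3]).mean(axis=0)

# Threshold relative to the spread across volumes. As noted above, this kind
# of thresholding breaks down when many volumes are corrupted, because the
# mean and standard deviation are themselves inflated.
z = (score - score.mean()) / score.std()
rejected = np.where(z > 3.0)[0]
print("Candidate volumes to reject:", rejected.tolist())
```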

If a dataset has too many volumes rejected, then I remove the entire dataset from the analysis. There are a few studies now looking at the effect of rejecting volumes, and the biases this might create (e.g. https://www.ncbi.nlm.nih.gov/pubmed/25585018).
When you have groups that are differently affected by motion, this may indeed be a problem. You could randomly remove volumes from your control group to see if it makes a difference to your results.
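
As a sketch of what I mean by that last suggestion (purely illustrative; the file names, the b=0 threshold and the use of mrconvert -coord are my own choices), you could randomly drop as many diffusion-weighted volumes from each control as are typically rejected in your patient group, rerun the analysis, and see whether your group differences change:

```python
import numpy as np

rng = np.random.default_rng(0)

bvals = np.loadtxt("control01.bval")  # FSL-style b-value file (example name)
n_drop = 5                            # e.g. typical number rejected in the patient group

# Only consider diffusion-weighted volumes for removal; keep all b=0 volumes.
dw = np.where(bvals > 50)[0]
drop = rng.choice(dw, size=n_drop, replace=False)
keep = sorted(set(range(len(bvals))) - set(drop))

# Index list usable with e.g.:
#   mrconvert control01.mif control01_subsampled.mif -coord 3 <list>
print(",".join(str(i) for i in keep))
```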

With the remaining volumes, I then do motion correction and detection of bad slices/voxels. I haven't tested the new outlier detection in eddy yet, so I can't comment on that. My recommendation would be to first go through every single slice in every volume of a dataset and label all the slices that you think should be rejected. Then let eddy do its thing, and compare its outlier report with your labels. That should give you a pretty good indication of how well it performs on your data.
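
If it helps, a minimal sketch of that comparison, assuming eddy's *.eddy_outlier_map output (as far as I remember, a volumes-by-slices matrix of 0s and 1s with a one-line header) and your manual labels saved in the same layout:

```python
import numpy as np

# eddy's outlier map: one row per volume, one column per slice, 1 = outlier.
# The file name depends on the --out prefix given to eddy, and skiprows=1
# assumes a single header line; both may need adjusting for your setup.
eddy = np.loadtxt("dwi_post_eddy.eddy_outlier_map", skiprows=1).astype(bool)

# Manual labels, saved as a 0/1 matrix in the same volumes-by-slices layout.
manual = np.loadtxt("manual_outlier_labels.txt").astype(bool)

tp = np.sum(eddy & manual)   # slices flagged by both
fn = np.sum(~eddy & manual)  # slices you flagged that eddy missed
fp = np.sum(eddy & ~manual)  # slices eddy flagged that you did not

print(f"Sensitivity against manual labels: {tp / (tp + fn):.2f}")
print(f"Slices flagged by eddy but not manually: {fp}")
```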

Cheers
Kerstin

Hi Alex,

A few scattered points on top of Kerstin’s comments:

could removing volumes result in unequal sampling across shells and diffusion directions (and potentially a difference in sampling between the control group, who tend to move less, and the patient group, who move more)?

Absolutely. This is one reason why acquiring a number of volumes exactly equal to the minimum required for a particular analysis (e.g. 45 volumes to allow an lmax=8 fit) is a bad idea: reject one volume and the dataset should probably be discarded. But hopefully with motion there won't be any particular pattern to which directions are rejected: rejecting multiple volumes with similar diffusion sensitisation directions (e.g. due to vibration) would be more deleterious than rejecting volumes whose diffusion sensitisation directions are scattered randomly (which behaves more like an SNR reduction). In MRtrix3-speak, we quantify this as the condition number of the transform from DWI intensities to SH coefficients; I can go into the details of this if there's interest.
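
To make that last point concrete, here's a rough numerical sketch. The SH convention is generic rather than MRtrix3's internal one, and the direction set is random rather than an optimised scheme, but the behaviour of the condition number is the same:

```python
import numpy as np
from scipy.special import sph_harm

def real_sh_basis(dirs, lmax):
    """Real, even-order spherical harmonic basis evaluated at unit vectors."""
    x, y, z = dirs.T
    polar = np.arccos(np.clip(z, -1.0, 1.0))
    azimuth = np.arctan2(y, x)
    cols = []
    for l in range(0, lmax + 1, 2):
        for m in range(-l, l + 1):
            Y = sph_harm(abs(m), l, azimuth, polar)  # scipy: sph_harm(m, l, azimuth, polar)
            if m < 0:
                cols.append(np.sqrt(2) * Y.imag)
            elif m == 0:
                cols.append(Y.real)
            else:
                cols.append(np.sqrt(2) * Y.real)
    return np.column_stack(cols)

# 60 random unit directions as a stand-in for one shell.
rng = np.random.default_rng(1)
dirs = rng.normal(size=(60, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Removing a cluster of similar directions (here: the 10 closest to the z-axis)
# typically degrades the conditioning more than removing scattered directions.
scattered = np.delete(dirs, np.arange(0, 60, 6), axis=0)
clustered = dirs[np.argsort(-np.abs(dirs[:, 2]))[10:]]

for label, d in (("all 60 directions", dirs),
                 ("10 scattered removed", scattered),
                 ("10 clustered removed", clustered)):
    print(f"{label}: lmax=8 condition number = {np.linalg.cond(real_sh_basis(d, 8)):.1f}")
```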

If you're manipulating these things manually, it would probably be a good idea to keep track of the number of rejected volumes / extent of motion per subject and use it as a regressor in any statistical analysis.

If I am going to remove volumes, the follow-up question is how many it is appropriate to remove before the dataset shouldn't be included at all.

Probably depends on the total number of volumes in your data / per b-value shell vs. the minimum number required for your analysis. In the outlier replacement paper, Jesper reports that up to 10% corruption is OK, which is probably a reasonable rule of thumb. Alternatively, if the reduction in the number of volumes results in a reduced achievable lmax, that could also serve as a hard-and-fast rule.
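
For reference, the "reduced achievable lmax" criterion is easy to check: an even-order SH series of order lmax has (lmax+1)(lmax+2)/2 coefficients, so a shell needs at least that many usable volumes. A quick sketch:

```python
def n_sh_coeffs(lmax):
    # Number of coefficients in an even-order (symmetric) SH series.
    return (lmax + 1) * (lmax + 2) // 2

def max_lmax(n_volumes):
    # Largest even lmax whose coefficient count fits within the number of
    # usable DWI volumes in the shell.
    lmax = 0
    while n_sh_coeffs(lmax + 2) <= n_volumes:
        lmax += 2
    return lmax

for n in (64, 58, 45, 44, 28, 27):
    print(f"{n} usable volumes in shell -> lmax {max_lmax(n)} "
          f"({n_sh_coeffs(max_lmax(n))} coefficients)")
```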

I also know that eddy performs an outlier detection and replacement strategy, but what are people's experiences with it, and is it sufficient on its own?

Activating outlier replacement in eddy requires an explicit command-line option (--repol), which is not currently set by the dwipreproc wrapper script. So if you're using the wrapper script, you're not currently getting outlier replacement. I need to determine an appropriate mechanism for detecting whether or not the option is available, so that the script doesn't crash if run with an older version of eddy. It's also unclear to me at the moment how version numbering for eddy is handled, given that it is now provided separately from the FSL core. If anybody has experience / knowledge of how this is / will be handled, I'd like to hear from you.
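
One possibility I've been considering, purely as a sketch: probe the usage text. It assumes the binary is called eddy and that the options it supports appear in the output of --help; neither assumption may hold for every build (e.g. eddy_openmp / eddy_cuda):

```python
import subprocess

def eddy_supports_repol(eddy_cmd="eddy"):
    # Crude capability check: does this eddy build mention --repol in its
    # usage text? Assumes the usage is printed in response to --help.
    try:
        result = subprocess.run([eddy_cmd, "--help"],
                                capture_output=True, text=True)
    except FileNotFoundError:
        return False
    return "--repol" in (result.stdout + result.stderr)

if eddy_supports_repol():
    print("This eddy build appears to accept --repol")
```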

Cheers
Rob

Thanks for your replies, this is all really helpful!