Reversed phase encoding for all DWIs

I am sorry if I am duplicating a topic or asking a stupid beginner question; I am new here. I need advice on the following question:

I found in the MRtrix3 documentation that, if you have full PE and rPE (phase encoding, reverse phase encoding) scans, you can use both full datasets as input for dwipreproc. If I understand correctly, the algorithm will extract just the b=0 images from them, then FSL’s topup will use those b=0s to estimate the field inhomogeneities, and FSL’s eddy will apply that estimate to correct the susceptibility artefacts. :slight_smile:
If I am right (well, maybe I am not, please let me know), then there is no advantage in collecting full PE and rPE datasets, and the processing is the same as for a dataset with just PE + some rPE b=0s (we are just lengthening our measurement).

My questions are:

  1. Am I right? :slight_smile:
  2. If I am, is there any other way to make use of the full dataset?

Thanks for every idea, answer and thanks for your time. I appreciate it.

Hi Michaela,

Sounds like your acquisition corresponds to point number 3 in the documentation page, which is activated either explicitly in dwipreproc* using the -rpe_all option, or implicitly by providing all data and using the -rpe_header option. In either case, the key to usage is that the DWIs need to be concatenated together before being fed to dwipreproc, i.e. dwipreproc doesn’t take multiple DWI series at its input.
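For instance, the concatenation step might look like this (just a sketch: the filenames dwi_AP.mif / dwi_PA.mif and the AP phase encoding direction are placeholders for your own data):

```shell
# Concatenate the two full series along the volume axis (axis 3),
# so dwipreproc receives a single DWI series
mrcat dwi_AP.mif dwi_PA.mif dwi_all.mif -axis 3

# -rpe_all indicates that all DWIs were acquired with both phase
# encoding directions; -pe_dir gives the direction of the first half
dwipreproc dwi_all.mif dwi_preprocessed.mif -rpe_all -pe_dir AP
```

If the phase encoding information is already stored in the image headers (e.g. following conversion from DICOM with mrconvert), the -rpe_header option can be used in place of -rpe_all and -pe_dir.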

There are a couple of advantages to this type of acquisition:

  • Having more than one b=0 image with reversed phase encoding for B0 inhomogeneity estimation should be more robust (albeit slower), and mitigates the risk that large subject motion during the acquisition of those images leads to an inability to estimate the field at all.

  • While the DWI signal in a distorted image voxel can be appropriately “relocated” to the undistorted spatial location from which the spins in that distorted voxel originated simply by using an inhomogeneity field estimate, it is not possible to recover the spatial contrast within that region of the undistorted image. Here’s a screendump of a crude slide I made trying to demonstrate this (yes I know, I need to improve the documentation surrounding dwipreproc):

In contrast, if you have acquired not only b=0 images for estimating the inhomogeneity field, but also DWIs with the opposing phase encoding direction, then it becomes possible to estimate, from two image volumes with the same diffusion sensitisation but opposing phase encoding directions, the underlying undistorted image that gave rise to the two distorted images. In dwipreproc this is done using the method of Skare et al. cited in the command reference.

If you are concerned about acquisition duration prospectively: in my own opinion, what will be increasingly used is e.g. splitting the direction scheme in two, acquiring half of the directions with one phase encoding direction and the other half with the opposing one. This means that in regions with a sharp field inhomogeneity gradient, where in one phase encoding direction all of the spins get squished together, you will still have at least half of your DWIs acquired with the opposing phase encoding direction, where those spins are instead spread out, and therefore you have some chance of recovering spatial contrast.
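If you want to set up such a scheme prospectively, MRtrix3 ships tools for this; a sketch (the direction count of 60 and the output filenames are arbitrary choices for illustration):

```shell
# Generate a uniformly-distributed 60-direction scheme
dirgen 60 directions.txt

# Split it into two subsets, each approximately uniform on the sphere,
# to be acquired with opposing phase encoding directions
dirsplit directions.txt directions_AP.txt directions_PA.txt
```

Splitting with dirsplit, rather than simply taking the first and second halves of the table, keeps each subset reasonably uniformly distributed, so the angular coverage remains usable even if only one half survives quality control.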


*dwipreproc was deprecated in version 3.0.0 in favour of dwifslpreproc

Dear @rsmith,
thank you so much for your perfect answer. I really appreciate it!