I have a dataset consisting of DWI data, with two runs (identical parameters, including PE direction) per participant. The data were collected on a Philips scanner, and I have only PAR and REC files. I ultimately want to calculate FA, MD, RD and AD metrics, as well as construct tractograms for a between-groups analysis, but am currently wondering how best to go about preprocessing and averaging the runs. My plan is to:
Convert PAR/REC files to NIfTI
Concatenate the runs (using fslmerge)
Run mrconvert, dwidenoise, mrdegibbs
Motion correct with dwipreproc (there are no fieldmaps/top-ups)
Complete B1 bias field correction
Average runs prior to dtifit and tractography steps
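For concreteness, the plan above might look something like the following (a sketch only: filenames and the AP phase-encode direction are placeholders, dcm2niix is one common tool for PAR/REC conversion, and in recent MRtrix3 releases dwipreproc has been renamed dwifslpreproc):

```shell
# 1. Convert PAR/REC to NIfTI (dcm2niix reads Philips PAR/REC)
dcm2niix -f run1 -o . run1.PAR
dcm2niix -f run2 -o . run2.PAR
# Import into MRtrix format with gradient tables attached
mrconvert run1.nii run1.mif -fslgrad run1.bvec run1.bval
mrconvert run2.nii run2.mif -fslgrad run2.bvec run2.bval
# 2. Concatenate the runs along the volume (4th) axis
mrcat run1.mif run2.mif dwi_all.mif -axis 3
# 3. Denoise and remove Gibbs ringing
dwidenoise dwi_all.mif dwi_den.mif
mrdegibbs dwi_den.mif dwi_deg.mif
# 4. Motion / eddy current correction; no reverse-PE data available,
#    and "AP" is an assumed phase-encode direction
dwipreproc dwi_deg.mif dwi_preproc.mif -rpe_none -pe_dir AP
# 5. B1 bias field correction (using the ANTs N4 algorithm here)
dwibiascorrect ants dwi_preproc.mif dwi_unbiased.mif
```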
My questions are whether this procedure seems reasonable or is likely to introduce bias at any estimation stage; and more specifically, whether I should split the runs back out after motion correction, before the B1 bias field step.
Firstly, I’m assuming that your only interest in utilising DWI data from the two runs is to improve signal quality, and that you are not going to be interrogating differences between the two runs.
From there, the principal question is what the participants may or may not have done in between the two scans.
At one extreme, if the two runs were acquired immediately after one another, then the vast majority of image attributes (described in more detail below) should be equivalent between them. However, one possible issue is that the scanner recalibrates between the two acquisitions, which can modulate the global signal magnitude between the two series. The dwicat script is intended to correct for such modulation; while I can’t advocate its use globally, I would suggest at least running it on your data to see whether there are fluctuations in overall signal magnitude between runs.
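Its invocation is simple (filenames assumed; inputs are the individual runs in .mif format with gradient tables embedded):

```shell
# Estimate and correct any global intensity modulation between the
# two series, writing the rescaled, concatenated result to one file
dwicat run1.mif run2.mif dwi_combined.mif
```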
Conversely, consider if the only consistent feature between the two runs is the identity of the participant. They could have been acquired on different dates, using different head coils, with different positioning of the participant’s head. In that case, there’s a lot of attributes that may vary considerably between the two runs, and therefore premature concatenation of the data may cause issues for subsequent processing steps.
So let’s break down the steps:
dwidenoise assumes a noise level that varies smoothly in 3D space but is fixed across volumes: for a given voxel, the noise level underlying the DWI intensities within that voxel and its neighbours is taken to be the same in every volume. If the noise level at one location in the image varies between the two runs, then that assumption breaks. Anything that influences the raw signal magnitude differentially between the two runs could be a problem here. So denoising separately may be safest.
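One way to check whether this is a concern for your data is to denoise the runs separately and compare the estimated noise maps (filenames assumed):

```shell
# Denoise each run independently, exporting the estimated noise maps
dwidenoise run1.mif run1_den.mif -noise noise1.mif
dwidenoise run2.mif run2_den.mif -noise noise2.mif
# Voxel-wise ratio of the two noise maps: values far from 1.0 suggest
# the constant-noise-level assumption would break for concatenated data
mrcalc noise1.mif noise2.mif -divide noise_ratio.mif
```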
mrdegibbs operates on each volume independently, so whether the runs are concatenated or processed separately has no influence on the result.
For motion / eddy current correction, the principal issue is that if the two runs are corrected separately, they will not be aligned spatially with one another. Correcting this afterwards would involve a second image interpolation step to get all DWI data onto a common image voxel grid, which would introduce additional blurring. So if this step is run once on concatenated data, there’s no problem; but if it’s run twice, I would suggest that for the second run it would be beneficial to include the first b=0 volume of the first run as the first volume of the second run. This ensures that FSL’s eddy is using the same image as the reference point toward which all geometric corrections converge.
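That trick might look something like this (filenames and the AP phase-encode direction are assumptions; dwipreproc is called dwifslpreproc in recent MRtrix3 releases):

```shell
# Extract the first b=0 volume of run 1 to serve as a shared reference
dwiextract run1_den.mif -bzero - | mrconvert - -coord 3 0 b0_ref.mif
# Prepend it to run 2, then run motion / eddy current correction
mrcat b0_ref.mif run2_den.mif run2_withref.mif -axis 3
dwipreproc run2_withref.mif run2_preproc_tmp.mif -rpe_none -pe_dir AP
# Strip the borrowed reference volume back out afterwards
mrconvert run2_preproc_tmp.mif run2_preproc.mif -coord 3 1:end
```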
If the patient position has changed substantially, the B1 bias field may be different between the two runs, though it’s generally a pretty smooth field, and so the position would need to change quite a lot between runs for it to be a problem. But if the runs are on different days, or involve different head coils, then estimation and correction of this field is probably best done separately for the two runs.
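In the separate-correction case, that would be (filenames assumed; the ANTs N4 algorithm is one available option, and exporting the estimated fields lets you compare them directly):

```shell
# Estimate and correct the B1 bias field for each run independently
dwibiascorrect ants run1_preproc.mif run1_unbiased.mif -bias bias1.mif
dwibiascorrect ants run2_preproc.mif run2_unbiased.mif -bias bias2.mif
```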
I would additionally advocate concatenating rather than averaging runs, for a couple of reasons. First, if the patient position has changed at all between the two runs, then even if the direction of diffusion sensitisation relative to the scanner is consistent, the direction of diffusion sensitisation relative to the biology is not; so directly averaging those data would lead to blurring in the angular domain. Second, averaging volumes alters the statistical distribution of the data, which may not be ideal if the algorithm for fitting the diffusion model assumes a particular distribution.
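Concatenation of the fully preprocessed runs is then a single command (filenames assumed):

```shell
# Concatenate the preprocessed runs along the volume axis; .mif headers
# carry the gradient tables, so these are concatenated automatically
mrcat run1_unbiased.mif run2_unbiased.mif dwi_final.mif -axis 3
```

The combined series then feeds the model fitting and tractography steps as one dataset, with every volume retaining its own (rotated) gradient direction.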
It is a fundamental assumption of the algorithm underlying the dwidenoise command that the noise level does not vary between volumes (importantly, this applies specifically to the noise level, and not SNR, which may vary at a given location between DWI volumes due to differences in signal level imposed by diffusion sensitisation contrast). Given that very little can change between volumes of the same sequence in the gamut of hardware & software responsible for producing those images, that should be a safe assumption within a single run.