Stitching/combining two DTI scans

Hi,

I’m trying to combine two DTI scans acquired sequentially from the same participant. The main aim of doing this is to get a larger FOV and perform fibre tracking simply on the combined image. These two scans overlap by ~2-3 cm, and were acquired with different orientations and b-vectors.

Scan 1: [image]

Scan 2: [image]

I’m having trouble finding commands in MRtrix suitable for this purpose. My current workflow still relies on a combination of MRtrix and MATLAB, and typically follows this procedure:

  1. Regrid both (denoised) DTI scans to match the dimensions of the mDIXON scan using mrgrid scan1.mif regrid scan1_regrid.mif -template mdixon.mif -fill nan (MRtrix; the commands for both scans are written out just after this list)
  2. Add the two scan images (stored as 4D matrices) (MATLAB)
  3. Average the image data in the region where the two scans overlap (MATLAB)
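For concreteness, step 1 applied to both scans looks roughly like this (file names are just placeholders):

  # Resample each denoised DTI scan onto the mDIXON voxel grid,
  # filling voxels outside that scan's FOV with NaN
  mrgrid scan1.mif regrid scan1_regrid.mif -template mdixon.mif -fill nan
  mrgrid scan2.mif regrid scan2_regrid.mif -template mdixon.mif -fill nan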

However, the combined DTI image either has two regions with different intensities, or contains only one of the scans while the other is missing. Also, although the differences between the b-vectors of the two scans are minor, I’m not sure whether this will affect the results much.

I’m wondering if there is a way to perform this kind of scan combination properly, correcting the intensities of the two scans without introducing errors into the b-vectors along the way?

Thank you so much in advance!

Yilan


Welcome Yilan!

The main aim of doing this is to get a larger FOV and perform fibre tracking simply on the combined image.

I love your use of the word “simply” here :stuck_out_tongue: :laughing:
This is a highly non-standard use of DWI data, and so is going to require a substantial amount of tailored coding and/or creativity…


Ideally, my preferred solution to this sort of problem would be:

  1. Define a target voxel grid, with a number of volumes equal to the sum of the number of volumes in the two input series, and the diffusion gradient table being the concatenation of the two input gradient tables;
  2. Resample each image independently onto that grid, filling with NaN for the half of the target volumes for which that series does not provide any data (steps 1 and 2 are sketched just after this list);
  3. For all subsequent processing steps, use within each voxel only those volumes for which the intensity is finite (i.e. ignore all volumes for which no valid input data are available).
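Steps 1 and 2 could be assembled from existing MRtrix3 commands. As a rough sketch, re-using the NaN-filled regridded images from your step 1 (file names are assumptions):

  # Concatenate the two regridded series along the volume axis,
  # giving one series with as many volumes as the two inputs combined
  mrcat scan1_regrid.mif scan2_regrid.mif -axis 3 combined.mif
  # Concatenate the two gradient tables in the same order and attach the result
  mrinfo scan1.mif -export_grad_mrtrix grad1.b
  mrinfo scan2.mif -export_grad_mrtrix grad2.b
  cat grad1.b grad2.b > grad_combined.b
  mrconvert combined.mif -grad grad_combined.b combined_dwi.mif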

This would however necessitate that all processing steps be robust to the presence of non-finite values in a subset of volumes, which is not a trivial thing to implement or test. So let’s instead consider an approach that doesn’t require this.

What you’re looking for, eventually, is a single image with 13 volumes and a corresponding diffusion gradient table. Now the first issue, as you’ve noted, is that the gradient tables are not identical, though they are close. You may choose to just swallow that imprecision, perhaps storing in the combined image a gradient table whose directions are the mean of the corresponding directions of the two series, without actually modifying the image intensities in any way. There is an alternative solution, but it may be overkill:

  1. Use the input image’s gradient table to map the DWI data to spherical harmonics (amp2sh);
  2. Sample the SH function amplitudes along the gradient directions defined for the output image (sh2amp); a rough sketch follows below.

This may not be great in this instance, given that you would be limited to lmax=2; though with a b-value of only 500, that limitation may matter less.
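As a rough sketch of that SH-based resampling (file names are assumptions, and it’s worth checking the amp2sh / sh2amp documentation for the exact behaviour of each option):

  # Restrict to the b=500 shell, then fit an lmax=2 SH series per voxel
  # using that scan's own gradient directions from the image header
  dwiextract scan1.mif - -shells 500 | amp2sh - scan1_sh.mif -lmax 2
  # Re-sample the SH coefficients along the direction set chosen for the
  # combined image (e.g. the averaged directions mentioned above)
  sh2amp scan1_sh.mif directions.txt scan1_resampled.mif

The same would then be repeated for the second scan against the same target direction set.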

For estimating and correcting for differences in intensity scaling between the two scans, IMO you can only depend on the region of overlap; you can’t use the rest of the images given they differ in anatomical content. For this I’d suggest mrhistmatch scale, operating on the b=0 data (assuming you have such), masking both images to just the region of overlap.
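In command form, something along these lines might do it (file names are assumptions; the *_regrid.mif images are the NaN-filled regridded series from earlier):

  # Mean b=0 image from each regridded series
  dwiextract scan1_regrid.mif - -bzero | mrmath - mean b0_1.mif -axis 3
  dwiextract scan2_regrid.mif - -bzero | mrmath - mean b0_2.mif -axis 3
  # Overlap region: voxels where both series contain valid (finite) data
  mrcalc b0_1.mif -finite b0_2.mif -finite -mult overlap_mask.mif
  # Estimate a single global scale factor for scan 2 within the overlap only
  mrhistmatch scale b0_2.mif b0_1.mif b0_2_scaled.mif -mask_input overlap_mask.mif -mask_target overlap_mask.mif
  # Apply that factor to the whole of the scan 2 series
  factor=$(mrcalc b0_2_scaled.mif b0_2.mif -divide - | mrstats - -mask overlap_mask.mif -output median)
  mrcalc scan2_regrid.mif $factor -mult scan2_scaled.mif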

Once you’ve done this, it’s a matter of determining the appropriate mathematical operations to perform in order to produce your desired data: average the two images in the overlapping region, and insert the data from one image or the other wherever the other is filled with NaNs. This could be done either in MATLAB or using MRtrix3 commands, and should not be overly difficult in either case. Alternatively, you could fill the regions outside each FoV with zeroes instead of NaNs, sum the two series, and then multiply the result by an image that contains 0.5 in the overlapping region and 1.0 everywhere else. There are multiple solutions here.
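The zero-fill variant, for instance, might look something like this (again with assumed file names, re-using overlap_mask.mif and the intensity-matched scan2_scaled.mif from above, and assuming the volumes of the two series are in corresponding order):

  # Replace NaNs outside each scan's FOV with zeroes
  mrcalc scan1_regrid.mif -finite scan1_regrid.mif 0 -if scan1_zero.mif
  mrcalc scan2_scaled.mif -finite scan2_scaled.mif 0 -if scan2_zero.mif
  # Sum the two, then halve the result within the overlapping region only
  mrcalc scan1_zero.mif scan2_zero.mif -add overlap_mask.mif 0.5 1.0 -if -mult stitched.mif
  # Attach whichever 13-direction gradient table you settled on
  # (e.g. the averaged directions discussed above)
  mrconvert stitched.mif -grad grad_mean.b stitched_dwi.mif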

Cheers
Rob