T1-like contrast from DWI data

Hi MRtrixperts,

It is that time again, when I reluctantly crawl out of my cave of ‘lurking around the web in hopes of finding answers to previously asked questions’.

I’m attempting to preprocess DWI data in accordance with this method as we have not acquired a blipup/blipdown pair for topup. In order to non-linearly register the DWI to T1-space, I need to invert one of the images’ contrast so that they match each other.

Here’s where the plot thickens. I saw @rsmith mention in a thread from 2017 that a method, not available at the time, would one day solve this problem. I know progress on ssmt has been made (?) but the last piece of the puzzle eludes me:

finding weights to reproduce the distribution of intensities in the T1-weighted image

Is this possible to do within the MRtrix3 suite? Am I barking up the wrong tree with this one?

Alternatively, I know that mrhistmatch has had non-linear functionality added to handle situations like this. Unfortunately, though, I can’t seem to get it to work. The form of the command I am using is shown below:
mrhistmatch nonlinear t1_brain.nii.gz b0_avg_brain.nii.gz t1_inv_non-linear.nii.gz -mask_input t1_mask.nii.gz -mask_target b0_avg_brain_mask.nii.gz

Unfortunately the results aren’t as intended:

T1-Weighted Image


B0 DWI Image

T1 “DWI” Weighted Image

As you can see, the contrast inversion has not quite worked. I have also tried switching the target and input images around, with no luck. Any help on this topic would be greatly appreciated.

Many Thanks

Matt

Hi Matt,

Perhaps you could obtain an image with similar contrast to a T1-weighted image, using the spherical harmonics decomposition of the diffusion signal:

Anisotropic Power Maps: A diffusion contrast to reveal low anisotropy tissues from HARDI data

It might be good enough for registration purposes. What do you think?
Best,
Roey

No, I’m not surprised; I’ve seen exactly that before. There is no good one-to-one mapping in that case. Sometimes it might get a bit closer (as it probably did in that paper), but most of the time it is off by way too much to be useful. We noticed this in our lab as well, when we desperately needed a solution for this work: https://academic.oup.com/brain/article/141/3/888/4788771 , and even more so for the follow-up work (where lesions needed to align accurately between T1, FLAIR and dMRI data): https://www.biorxiv.org/content/10.1101/623124v1.full .

We ended up with a fully working solution that’s based on this mechanism to produce a T1-like contrast (this will greatly add to your web-lurking attempt to find answers :wink: ): https://www.researchgate.net/publication/307862882_Generating_a_T1-like_contrast_using_3-tissue_constrained_spherical_deconvolution_results_from_single-shell_or_multi-shell_diffusion_MR_data .

Internally, Rami and I developed an integrated solution to estimate, and re-estimate, this T1-like contrast during iterations of registration (with outlier rejection, etc…). This worked perfectly well with our data, even with large volumes of white matter lesions (their contrast was estimated correctly too).

However! (buzz-kill follows)

For this to work well with the current registration in MRtrix3 (which we used at the time), the key point being that it is currently limited to squared-difference based registration that aims for exactly matching intensities, you’ll need decent bias field correction for the T1w image. This is, by the way, also partially what causes issues for the other strategy in the results you show; but it’s more complicated there.

Therefore, it’s not robust at all times: I’ve seen it work on some datasets, and perform sub-optimally on others. It’s not always easy to inspect the result, so it’s hard for the average user to identify when it works sub-optimally. So I find it a bit too risky at the moment to just put it out there as an integrated solution; people might get inaccurate results and not even realise it.

What I recommend if you want to give something like this a shot in any case is roughly this:

  1. Make sure you run some form of bias field correction on the T1w image (ANTs’ N4 algorithm is good for this)
  2. Run a form of 3-tissue CSD on your dMRI data. You can go with MSMT-CSD if you’ve got multi-shell data. SS3T-CSD should do the job for single-shell data most of the time. If you need SS3T-CSD, it’s available at https://3Tissue.github.io .
  3. Once you’ve got a bias field corrected T1w image and 3-tissue maps, make sure they’re at least already reasonably rigidly aligned; so there’s a lot of (correct / accurate) overlap between all relevant tissues (WM - GM - CSF). You can use e.g. FSL’s FLIRT with a normalised mutual information metric for this.
  4. Extract the WM map from the WM FOD volume (mrconvert wmfod.mif wm.mif -coord 3 0). The GM and CSF from 3-tissue CSD techniques are already “just” maps out of the box.

From here on, follow essentially the steps (as shown in the images) in https://www.researchgate.net/publication/307862882_Generating_a_T1-like_contrast_using_3-tissue_constrained_spherical_deconvolution_results_from_single-shell_or_multi-shell_diffusion_MR_data (A-B-C-D-E below match with the image in the abstract):

A. Bias field corrected T1w image; you got this from the steps above.

B. Resample the aligned T1w image to the grid of your diffusion data / 3-tissue CSD result. This might have already been done “automatically” for you if you used e.g. FSL tools to register. Then apply the brain mask you used for your dMRI data. Everything now “lives” on the same grid.

C. 3-tissue CSD result; you’ve got this now as individual tissue maps (GM and CSF directly from 3-tissue CSD, WM from the command above extracted from the WM FOD).

D. Normalise to sum to one for each tissue map / compartment. You effectively get tissue signal fractions then, a useful piece of information for many subsequent analyses. You can e.g. do this per tissue type via mrcalc (assuming wm.mif is only a map, see above):

mrcalc mask.mif wm.mif wm.mif gm.mif csf.mif -add -add -divide 0 -if frac_wm.mif
mrcalc mask.mif gm.mif wm.mif gm.mif csf.mif -add -add -divide 0 -if frac_gm.mif
mrcalc mask.mif csf.mif wm.mif gm.mif csf.mif -add -add -divide 0 -if frac_csf.mif
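For reference, step D can also be expressed in NumPy terms; this is just an illustration of what the mrcalc calls above compute, with made-up per-voxel values (all names and numbers are purely illustrative):

```python
import numpy as np

# Made-up per-voxel tissue values (e.g. GM/CSF maps and the extracted
# WM map), one entry per voxel; names and numbers are illustrative:
wm   = np.array([0.8, 0.1, 0.0])
gm   = np.array([0.3, 0.9, 0.1])
csf  = np.array([0.1, 0.2, 1.2])
mask = np.array([True, True, True])

# Equivalent of the mrcalc calls above: divide each map by the tissue
# sum, and set voxels outside the mask to zero:
total = wm + gm + csf
frac_wm  = np.where(mask, wm / total, 0.0)
frac_gm  = np.where(mask, gm / total, 0.0)
frac_csf = np.where(mask, csf / total, 0.0)
```

Inside the mask, the three fraction maps now sum to one in every voxel, which is exactly the tissue signal fraction property mentioned above.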

E. Fit the T1w intensities (on the dMRI grid, see above) as a linear combination of these 3 tissue signal fraction maps. The easiest way to do this in any external software (e.g. MATLAB, R, …) is to use mrdump with -mask mask.mif on each of the frac_....mif images above separately (so run it 3 times), and then also on the regridded T1w image. This will give you 3+1 text files, with all intensities of all those images stored in the same matching order. Import these e.g. into MATLAB, and run a least squares estimation with 3 unknowns and as many equations as there are voxels in the mask (i.e. typically a massive number). Do check all input text files first for non-physical or non-finite values such as NaN or Inf (various steps might introduce these). If any exist, remove the entire equation from the system, i.e. the corresponding entries in each of the 4 text files / vectors.
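To make that fitting step concrete, here is a minimal Python/NumPy sketch of the least-squares estimation, including the filtering of non-finite entries. The file name in the comment is hypothetical, and synthetic data is used here purely to show the mechanics; in practice the input vectors come from the mrdump calls described above:

```python
import numpy as np

def fit_tissue_weights(frac_wm, frac_gm, frac_csf, t1):
    """Fit T1w intensities as a linear combination of the three
    tissue signal fraction maps (one value per voxel, matching order)."""
    A = np.column_stack([frac_wm, frac_gm, frac_csf])
    # Drop any equation (voxel) with non-finite values (NaN / Inf),
    # which earlier processing steps may have introduced:
    keep = np.isfinite(A).all(axis=1) & np.isfinite(t1)
    weights, *_ = np.linalg.lstsq(A[keep], t1[keep], rcond=None)
    return weights  # [w_wm, w_gm, w_csf]

# In practice, each input vector would be loaded from an mrdump output,
# e.g. frac_wm = np.loadtxt("frac_wm.txt") (file name hypothetical).
# Synthetic data here, just to show the true weights are recovered:
rng = np.random.default_rng(0)
fracs = rng.dirichlet([1.0, 1.0, 1.0], size=10000)  # rows sum to one
true_w = np.array([1000.0, 700.0, 200.0])  # WM bright, CSF dark on T1w
t1 = fracs @ true_w + rng.normal(0.0, 5.0, size=10000)
w = fit_tissue_weights(fracs[:, 0], fracs[:, 1], fracs[:, 2], t1)
```

The three entries of the solution vector are then the weights to plug into the final mrcalc combination.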

Finally, once you’ve got the 3 weights, use mrcalc to multiply each by its fraction image and sum them, as follows:

mrcalc frac_wm.mif WM-WEIGHT-HERE -mult frac_gm.mif GM-WEIGHT-HERE -mult frac_csf.mif CSF-WEIGHT-HERE -mult -add -add simulated_T1w_contrast.mif

The final image should resemble your bias field corrected / regridded T1w image closely, similar to figures B and E in the abstract.

As mentioned, even when it closely resembles it, I would still not recommend sum of squared differences guided registration: if the intensities are even ever so slightly off, registration can very easily be misguided. At this stage, I’d use the resulting contrast with a non-linear registration such as FSL’s FNIRT, constrained so that you’re only after a very, very smooth warp (even when these distortions are large, they’re still relatively smooth in space). Get the warp from FNIRT, convert it into MRtrix3 format, and apply it to the WM FOD image, and to the GM and CSF compartments if you need them.

That explanation got longer than I intended… :grimacing: :grin:

It looks more complicated than it is in practice, but as you’ll have noticed, you’ll need some confidence chaining a few tools together. I wrote that abstract and generated the results all at once in less than a day; if that can aid confidence (and hopefully not crush it :blush:).

Cheers,
Thijs

I’m attempting to preprocess DWI data in accordance with this method as we have not acquired a blipup/blipdown pair for topup. In order to non-linearly register the DWI to T1-space, I need to invert one of the images’ contrast so that they match each other.

I was only recently having another look at that method myself. Within their implementation in BrainSuite, that contrast matching should be occurring within their processing script, and shouldn’t have to be performed explicitly. Personally I wasn’t able to get results I was happy with, but would be interested to hear from anyone who has managed to get it working well.

I do nevertheless use the same concept myself for inter-modal rigid-body registration following DWI pre-processing. What you’re missing in your attempt is that, in order for the gross intensities of the different tissues to be approximately monotonic between the two images, you first need to invert one of the images prior to the non-linear histogram matching; this is just a mrcalc 1.0 image.mif -div call. This also depends on good T1 bias field correction, just as the multi-tissue approach does.
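To illustrate why that inversion matters, here is a small numeric sketch with made-up tissue intensities (not real data): taking the reciprocal flips the bright/dark ordering of CSF, GM and WM, so the inverted T1w ranks the tissues in the same order as a b=0 image before mrhistmatch does the fine-grained matching:

```python
import numpy as np

# Made-up mean intensities for CSF, GM, WM on a T1w image (WM brightest):
t1 = np.array([200.0, 700.0, 1000.0])

# Equivalent of `mrcalc 1.0 t1.mif -div t1_inv.mif`: reciprocal inversion.
t1_inv = 1.0 / t1

# On a b=0 image the ordering is reversed (CSF brightest, WM darkest);
# after inversion the T1w ranks the tissues in the same order as the b=0:
b0 = np.array([900.0, 450.0, 350.0])
assert np.argsort(t1_inv).tolist() == np.argsort(b0).tolist()
```

Without the inversion, no monotonic intensity mapping can relate the two contrasts, which is why mrhistmatch alone produced the result shown above.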