Global intensity normalisation and white matter hyperintensities

Dear experts,

I had a question regarding the global intensity normalisation correction for DWI images. Initially, I was following this MRtrix documentation page: https://mrtrix.readthedocs.io/en/0.3.16/workflows/DWI_preprocessing_for_quantitative_analysis.html, which notes that if your sample is affected by white matter disease, it may be more appropriate to normalise using the median b=0 CSF intensity.

I am working with data from a cohort of older adults who are either healthy, have mild cognitive impairments, or have Alzheimer’s disease, and so white matter hyperintensities would be present at varying levels between and within these groups.

Now, I have moved on to using multi-tissue CSD and FBA analysis with this MRtrix documentation page: https://mrtrix.readthedocs.io/en/3.0.1/fixel_based_analysis/mt_fibre_density_cross-section.html, which uses mtnormalise to perform the global intensity normalisation correction. My question is whether I still need to account for the white matter hyperintensities somehow with this correction, and if so, how could I do this?

In the previous steps for the diffusion analysis, for the response function (dwi2response), I have included the pathological tissue with the map of white matter lesions as the 5th tissue in the 5ttgen algorithm. I have also read the Mito et al., 2018 paper, in which they seem to derive factors from each tissue (white matter, grey matter, and CSF) for intensity normalisation.

Thanks,
-Lenore

Hi Lenore,

Hope you’re doing well!

I can give you some quick answers, but don’t hesitate to ask if you need more elaborate explanations. I’m very short on time, so I’ll focus on the essence so you can proceed with processing:

If you can perform 3-tissue CSD, either using multi-shell data with MSMT-CSD or single-shell data with SS3T-CSD, then mtnormalise will take care of everything relevant for you in terms of intensity normalisation. We’ve used this successfully in both dementia as well as stroke subjects without issue. A few particularly extreme effects are a slight exception to the mtnormalise assumptions; I’m aware of a few specific ones, but a good example you might be seeing in your subjects would be the occasional calcification of (parts of) the choroid plexus. This might sometimes show up as “super WM/GM-like tissue”. Don’t worry about those either, as mtnormalise correctly picks them up as outliers and inherently ignores them in the process of intensity normalisation. As for recent works (apart from the Mito et al. studies): we used this successfully in dementia here, and in stroke here and here. The follow-up to the last one in that list is going through revision, but an OHBM poster is already available here. Others are in the pipeline. As mentioned, feel free to ask further if helpful.
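For reference, a typical mtnormalise call jointly normalises all three tissue compartment ODF images in one go (the filenames below are just placeholders for your own files):

```shell
# Joint multi-tissue intensity normalisation (and bias field correction)
# of the WM, GM and CSF ODF images, within a brain mask.
mtnormalise wmfod.mif wmfod_norm.mif \
            gmfod.mif gmfod_norm.mif \
            csffod.mif csffod_norm.mif \
            -mask mask.mif
```

Each input ODF image is paired with its corresponding normalised output image, so the three compartments are corrected consistently.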

Furthermore, the safest approach to getting your response functions in the presence of lesions is dwi2response dhollander. Due to how it works and due to the nature of most WMHs, those are also naturally ignored at some point within the algorithm. That way, you don’t risk being biased by manual segmentations, or whichever source provides your segmentations. We know lesions can otherwise be segmented as “GM” by some strategies, which is not desirable in response function estimation if you don’t manually remove them. But dwi2response dhollander is robust against this out of the box. This was already the case back here, but recent improvements have made the WM response estimation in particular even more robust, should lesions happen to fall within some of our favourite single-fibre WM regions. :wink:
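In practice, this is a single unsupervised call that produces all three response functions directly from the dMRI data (filenames are placeholders):

```shell
# Unsupervised 3-tissue response function estimation; no tissue
# segmentation from T1w data is required, so lesion mis-segmentation
# cannot bias the result.
dwi2response dhollander dwi.mif \
             response_wm.txt response_gm.txt response_csf.txt \
             -mask mask.mif
```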

Finally, with respect to the Mito et al. (2018) intensity normalisation approach you mentioned:

Yes, that was quite a unique one: it effectively used the voxels from dwi2response dhollander for global intensity normalisation :upside_down_face: . This was the precursor to what itself became a precursor to the current mtnormalise, and there is some level of shared logic even in this approach. I wouldn’t try to replicate that any more at this stage though: I wholly recommend mtnormalise for this purpose. Note that mtnormalise also corrects for bias fields, which back then we had to do separately with the less optimal dwibiascorrect approach.

Hope that helps and provides enough detail! Otherwise, just let me know. :slightly_smiling_face: :+1:

Cheers & take care,
Thijs


Dear Dr. Dhollander,

Thank you so much for your detailed response. That is a relief about the mtnormalise function, as one of my advisors had asked me about this, so now I can tell them :slightly_smiling_face:

Ah, it makes sense to use dwi2response dhollander instead of dwi2response msmt_5tt to estimate the response functions for my data. For some reason, I had thought that dwi2response msmt_5tt was ‘better’ and more accurate in the sense that it utilised more MRI data, but I was wrong about that! :slightly_smiling_face: :upside_down_face:

Following the MRtrix manual, I would then use dwi2fod msmt_csd since I have multiple shells (b=0, b=1000, b=2000). Since I do have other MRI data (e.g. T1w, T2-FLAIR), I was wondering if I could still use these in my analysis? For example, would it be appropriate to still create the 5 tissue type (5TT) image and use it as a tissue constraint in the tractography generation (i.e. Anatomically-Constrained Tractography (ACT), via tckgen -act image)? One of the reasons that I did not do this initially was that I read in another MRtrix community forum post here that the combination of multi-tissue CSD and ACT may not be appropriate.

Thanks,
-Lenore


Yep, all good!

Yes, the way you can think about this “in general”: using more data is good if it’s used at least as accurately as the rest of the data. Otherwise, relying on more data also means relying on all assumptions made about those extra data, and suffering the potential inaccuracies that come with them. This is such a case: the dMRI data itself has very good information on the features we want those response functions to have. The T1w image has problems in this regard (e.g. lesions being segmented as GM), and brings extra issues (alignment to the dMRI data, both motion and differences in distortion). So instead of enabling us, it’s holding us back.

All good. In some recent works, we’ve opted to only use the highest b-value + b=0 data (with SS3T-CSD) to be more specifically sensitised to the intra-axonal compartment of WM. In any case, either will work “reasonably” in practice.
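For the multi-shell route, the CSD step would look something like this (filenames again placeholders; each response function is paired with its output ODF image):

```shell
# Multi-shell multi-tissue CSD using the three response functions
# estimated earlier.
dwi2fod msmt_csd dwi.mif \
        response_wm.txt wmfod.mif \
        response_gm.txt gmfod.mif \
        response_csf.txt csffod.mif \
        -mask mask.mif

# If instead following the "highest b-value + b=0" approach mentioned
# above, the relevant shells can be extracted first, e.g. for a
# b=2000 acquisition:
dwiextract dwi.mif -shells 0,2000 dwi_b0_b2000.mif
```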

Not needed for an FBA at least: they serve no purpose there. Just 3-tissue CSD + the typical FBA analysis will inherently give you appropriate WM FODs to analyse, both in normal appearing WM as well as within lesions themselves.

Only if you need ACT, i.e. in connectomics. Again, in FBA it’s not relevant. But in terms of your bigger question here maybe: if you want to look into connectomics and rely on ACT, yes, you need a way to deal with lesions. The default typical 5ttgen route (fsl) will likely segment lesions as GM, and this is inappropriate in the ACT context. You could e.g. segment lesions using an external tool, and replace the false-positive GM segmentation in the lesion areas of the 5TT file with WM instead. WMHs, in the end, are still WM, with axons simply running through those areas.
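One way to do that replacement, assuming you have a binary WMH mask from an external tool already co-registered to your 5TT image, is 5ttedit, which can overwrite the tissue assignment within a mask:

```shell
# Assuming lesion_mask.mif is a binary WMH segmentation (placeholder
# name) aligned to the 5TT image: force those voxels to be treated as
# white matter for ACT, overriding any false-positive GM segmentation.
5ttedit -wm lesion_mask.mif 5tt.mif 5tt_lesionfixed.mif
```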

That looks like a more outdated context. No worries, 3-tissue CSD and ACT are separate and unrelated in this regard.

Otherwise, I’ve been hearing of people looking into the recently published COMMIT2 for connectomics, where anatomical constraints and quantitative tractography are unified, based on the GM parcellation used. Depending on the accuracy of the GM parcellation, lesions might already be absent from it (which is a good thing). The main benefit is that it also deals to a large extent with the false positives common to probabilistic tractography. They’re still tweaking the code to be easier to install and more user-friendly in general, I believe, but it looks very promising. Certainly one to keep an eye on. You could feed it the total (voxel-wise) FD map directly.

Cheers,
Thijs


Hi Dr. Dhollander,

This clarifies many things for me - thank you once again! I am interested in doing both FBA and a connectome analysis in my data, and so I will follow your advice on these :blush:

Thanks for sharing COMMIT2 - I am currently using FreeSurfer and an HCP atlas for the parcellation - but I have just started working on the connectome analysis, so I am still learning how to do this / evaluating what is best to use, etc. :slightly_smiling_face:


Aha, nice, so that all makes sense then indeed! :relaxed:

Yep, certainly take a look at the manuscript: https://advances.sciencemag.org/content/6/31/eaba8245 . I found it a very interesting and intuitive read, with a nice selection of experiments to validate it as well; very insightful. Certainly don’t overlook the somewhat hidden supplementary document; there’s lots of food for thought in there. It’ll be interesting to see how it affects conclusions of existing connectomics studies.

Good luck with the analyses!

Cheers,
Thijs
