mtnormalise for DWI

Hello

I am interested in applying mtnormalise directly to 4D DWI data rather than the FODs, and was wondering:

  1. if for some reason this wasn’t sound? From what I can tell, it at least does a beautiful additional inhomogeneity correction.

  2. if I wanted to apply another multi-tissue model down the line, is there some way to take advantage of the various tissue compartment inputs, again in the context of a DWI input rather than the FODs?

Any advice would be greatly appreciated!

Thanks so so much in advance

Welcome @T.M.N!

> I am interested in applying mtnormalise directly to 4D DWI data rather than the FODs

I can read this two ways:

  1. You want to use DWI data to estimate a bias field.
    The very premise of the mtnormalise algorithm is that certain expectations of the data following a multi-tissue decomposition are broken in the presence of a bias field, and it is precisely those broken expectations that enable estimation of that field. Without a multi-tissue decomposition, however, the theory is inapplicable from the outset.

  2. You want to apply the bias field estimated by mtnormalise to correct for the bias field in DWI data.
    This is easy: The -check_norm option outputs the estimated bias field, which can then be applied to the DWIs from which the ODF images were estimated.
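
For reference, here's a minimal sketch of that workflow (filenames are placeholders; this assumes a standard multi-shell multi-tissue pipeline, and that the field is divided out of the data):

```
# Multi-tissue decomposition
dwi2response dhollander dwi.mif wm.txt gm.txt csf.txt -mask mask.mif
dwi2fod msmt_csd dwi.mif wm.txt wmfod.mif gm.txt gm.mif csf.txt csf.mif -mask mask.mif

# Normalise the tissue ODFs, saving the estimated bias field
mtnormalise wmfod.mif wmfod_norm.mif gm.mif gm_norm.mif csf.mif csf_norm.mif \
    -mask mask.mif -check_norm field.mif

# Apply the same field to the DWIs from which the ODFs were estimated
mrcalc dwi.mif field.mif -div dwi_biascorrected.mif
```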

Cheers
Rob

Thank you so much for your quick and helpful response! I am so sorry to have missed this and come back to it months later (many apologies). You did in fact answer my question in 2, and I was easily able to use the multi-tissue bias field output to correct my DWI (!!).

@Julio_Villalon and I do have a very naive follow-up question: as I mentioned, the first time around I attempted this by just supplying the full DWI to mtnormalise rather than each tissue FOD. The output DWI somehow actually looked quite reasonable, i.e. normalized values and fewer inhomogeneities. I'm a bit embarrassed to ask, but could you confirm whether or not that output is "usable", and what mtnormalise does in that case? I understand from your response that it requires a multi-tissue decomposition to properly estimate the field as described, but would the output DWI be nonsense, or usable for further analyses?

Thanks again for your time and help

Well, it’s maybe not wholly surprising that it might look like there is less of a bias field in the output of such an operation than there was at the input. But I would certainly not take that as evidence that mtnormalise can be used on DWI data. Doing so gives mtnormalise more degrees of freedom than it should have, and does not overcome the fact that its fundamental premise is disobeyed.

The initial version of the mtnormalise method itself is presented here. It’s not quite the same as the current code, but the basic assumptions are the same:

  1. If a bias field is present, the sum of tissue contributions in brain voxels will deviate from 1.0;
  2. If a tissue response function is too large / small, the ODF for that tissue will globally be too large / small.

This is solved by finding a smoothly-varying bias field, as well as a multiplicative factor for each individual tissue, that together make the sum of tissue contributions as close to 1.0 as possible in every voxel within the mask. All of this is justified both by the physical process that gives rise to an unwanted bias field, and by the fact that response function estimation is imperfect and that this imperfection manifests in the ODFs as a global scaling of the corresponding tissue.
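
In rough notation (a schematic of the idea only, not the exact objective of the current implementation, which iterates in the log-domain): given per-tissue contributions $T_t(v)$ in each voxel $v$, mtnormalise seeks

$$
\hat{f},\ \{\hat{s}_t\} \;=\; \underset{f,\ \{s_t\}}{\arg\min}\ \sum_{v \in \text{mask}} \left( \frac{\sum_t s_t\, T_t(v)}{f(v)} - 1 \right)^{2}
$$

where $s_t$ is the global multiplicative factor for tissue $t$, and the bias field $f(v)$ is constrained to vary smoothly (a low-order polynomial basis in the current code).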

Now consider what happens when you provide as input to mtnormalise a DWI series. It will treat each individual DWI volume as its own “tissue”, and try to figure out a multiplicative factor for each individual DWI volume, as well as a smoothly-varying bias field, that makes the sum of all DWI signal intensities equal to 1.0. This doesn’t make a great deal of sense.

Let’s imagine, hypothetically, that we try to do something slightly less daft. Let’s use dwishellmath mean to get the mean signal intensity in each b-value shell, removing the unwanted orientation information and forbidding mtnormalise from scaling individual DWI volumes up or down relative to one another¹. Now the problem that mtnormalise attempts to solve is: find a multiplicative factor for each b-value shell, as well as a smoothly-varying bias field, that results in the sum of mean shell intensities across b-value shells being the same value in all brain voxels. The question is: why should the sum of mean shell signal intensities be a constant value? Imagine that you have data with b=0 and b=3000 shells. This would only work if the sum of the CSF values in b=0 and b=3000 were equivalent to the sum of the WM values in b=0 and b=3000. There’s nothing guaranteeing this to be the case.
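
To attach some entirely made-up (but not implausible) numbers to that: suppose the mean CSF signal is 2000 at b=0 and essentially 0 at b=3000, while the mean WM signal is 800 at b=0 and 250 at b=3000. The shell-summed values are then roughly 2000 for CSF versus 1050 for WM, and nothing about the acquisition forces these to agree; and once a third tissue type enters the picture, generically not even the per-shell scale factors can reconcile all of them simultaneously.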

The fundamental premise in mtnormalise - that the sum of tissue contributions in each voxel should be 1.0 - is simply not applicable unless a tissue decomposition has first taken place. Applying it to DWI data will probably do some bias field correction, simply because some areas of the image are darker or brighter than others when all DWI volumes are summed together. But if you were to start with a blank sheet of paper, wanting an algorithm tailored for correcting bias fields in DWI data without necessitating a tissue decomposition, mtnormalise is definitely not what you would come up with.


¹ While this can happen due to signal drift, correcting for that requires its own tailored approach.