Mtnormalise

Dear All,

I am a bit confused about the mtnormalise command. I assumed that the contributions per tissue type from the FODs would sum to 1 (or any other value that can be set). However, the normalized CSD coefficients from multi-shell multi-tissue CSD do sum up to 1. Can someone please clarify how the normalization in log space is done?

Best,
Oeslle

I assumed that the contributions per tissue type from the FODs would sum to 1 (or any other value that can be set)

Not unless you directly intervene to do so. The reconstruction is simply the weighted sum of the input response functions (with angular distribution in the case of lmax > 0) that most closely matches the empirical DWI signal, given the non-negativity constraint. There is no enforcement - or even regularisation - of what the sum of those weights must be. This is an important distinction between our specific implementation of CSD and basically every other diffusion model. There are some thoughts on the subject here, as well as in the CSD / AFD / MSMT manuscripts.
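A toy numerical sketch of that point (not MRtrix code; the responses and weights below are made-up numbers): the fit is a weighted sum of per-tissue responses, and nothing constrains those weights to sum to 1.

```python
import numpy as np

# Hypothetical per-tissue "responses" sampled at 3 b-values (columns: WM, CSF).
responses = np.array([[1.0, 1.0],
                      [0.6, 0.1],
                      [0.3, 0.0]])

true_weights = np.array([0.7, 0.5])       # sum = 1.2, not 1.0
signal = responses @ true_weights         # noiseless "measured" signal

# Unconstrained least squares recovers the weights exactly here; in CSD the
# solver additionally enforces non-negativity, but still imposes no constraint
# on what the weights must sum to.
weights, *_ = np.linalg.lstsq(responses, signal, rcond=None)
print(weights.sum())                      # ≈ 1.2 -- not forced to 1.0
```

The fitted weights simply land wherever the data put them; a sum of 1.0 would be a coincidence, not a property of the model.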

Indeed, another way to think about it: if the contributions per tissue type summed to 1.0, then mtnormalise would have no purpose; the entire raison d'être of that method is that these contributions don't sum to 1.0.

However, the normalized CSD coefficients from multi-shell multi-tissue CSD do sum up to 1.

This sounds like you are performing a direct intervention to force the signal contributions from the tissues to sum to 1 in each voxel? That’s not an invalid thing to do; it just has consequences for how such data should be interpreted.

Can someone please clarify how the normalization in log space is done?

One way to think about mtnormalise is: rather than forcing the sum of tissue fractions to be precisely 1.0 in every brain voxel individually and independently, it pushes these sums of fractions toward 1.0, in a way that enforces spatial smoothness of the multiplicative factor that is applied. Any B1 bias field present in the imaging experiment is expected a priori to be spatially smooth, and so correction of such should obey this expectation; conversely, if the sum of fractions is forced to be precisely 1.0 in every voxel, the multiplicative factors applied to make this the case may vary abruptly between adjacent image voxels. That said, there are analyses where forcing the signal fractions to sum to 1.0 is desirable, or even necessary; they are just different forms of normalisation with different goals.

Use of logarithm space within mtnormalise is simply a way to make the bias field estimation and intensity normalisation better-posed both mathematically and pragmatically. In the extreme case, an estimated multiplicative field extrapolated beyond the region in which it was estimated could become negative, and so correcting the data using that field could result in negative ODFs; by estimating the field in logarithm space, and applying the exponential transform to such in order to obtain the multiplicative factor in each voxel, it is mathematically impossible for those factors to be negative. It’s not an uncommon numerical optimisation technique whenever data are intended to be multiplicative, which is the case both for the bias field and the tissue balancing factors in mtnormalise.
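A minimal sketch of why the log-space formulation guarantees positivity (all field values below are hypothetical): a field fitted and extrapolated in the original intensity domain can cross zero, whereas exponentiating a log-space field cannot produce a non-positive factor.

```python
import math

# A field extrapolated linearly in intensity space can go negative
# outside the region in which it was estimated:
linear_field = [1.2, 0.8, 0.4, 0.0, -0.4]     # tail < 0: would flip ODF signs

# Estimating in log space instead, then exponentiating, keeps every
# multiplicative factor strictly positive, however far we extrapolate:
log_field = [0.18, -0.22, -0.92, -1.8, -2.7]  # extrapolated in log space
factors = [math.exp(v) for v in log_field]

assert min(linear_field) < 0.0                # the failure mode being avoided
assert min(factors) > 0.0                     # exp(x) > 0 for any real x
```

The same trick appears throughout numerical optimisation whenever a quantity is multiplicative by nature, exactly as described above for the bias field and tissue balancing factors.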

Dear @rsmith, Thank you very much for a detailed explanation.

Nonetheless, it is still not 100% clear to me how this normalization is done. If I may, I would like to ask you a few more questions on the topic.

it pushes these sums of fractions toward 1.0, in a way that enforces spatial smoothness of the multiplicative factor that is applied.

How does that happen? If it is a long answer, can you please suggest something to read so that it becomes clear to me? Also, I directly set -value to 1, but the CSD coefficients did not sum up to 1.0.

This sounds like you are performing a direct intervention to force the signal contributions from the tissues to sum to 1 in each voxel? That’s not an invalid thing to do;

The mtnormalise command has an option -value that performs a direct intervention on the sum of tissue contributions (-value number: specify the (positive) reference value to which the summed tissue compartments will be normalised (default: 0.282095, SH DC term for unit angular integral)). Using the default configuration, tractography from the normalised wm_fod generates more spurious streamlines than the non-normalised one.

it just has consequences for how such data should be interpreted.

Lastly, what are the implications for tractography of using normalized WM FODs? In my case, I am performing ROI-based probabilistic tractography, and I noticed more spurious streamlines using the normalized WM FOD compared to the non-normalized one.

Best Regards,

OK, there’s still a misunderstanding about what mtnormalise does. It does not enforce that the tissue densities sum to the reference value on a voxel-by-voxel level. It aims to find the smooth (multiplicative) bias field (modelled as a cubic spline) that, when applied to the sum-of-densities image, gives an average value close to the desired reference value.
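A degenerate sketch of that idea, reducing the smooth spline field to a single global factor (all voxel values below are hypothetical): the factor is chosen in log space so that the sum-of-densities image matches the reference *on average*, while individual voxels still deviate from it.

```python
import math

reference = 0.282095                      # mtnormalise's default reference value
density_sum = [0.21, 0.25, 0.33, 0.29]    # hypothetical per-voxel sums of tissue densities

# Average correction estimated in log space, then exponentiated:
log_scale = sum(math.log(reference / d) for d in density_sum) / len(density_sum)
factor = math.exp(log_scale)
corrected = [factor * d for d in density_sum]

# The geometric mean of the corrected sums now matches the reference exactly,
# but no individual voxel is forced to equal it.
geo_mean = math.exp(sum(math.log(c) for c in corrected) / len(corrected))
print(geo_mean)                           # ≈ 0.282095
```

The real algorithm replaces the single factor with a smoothly varying field, but the principle is the same: match the reference on average, not voxel-by-voxel.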

It depends on exactly what you mean here, but it sounds like you’re expecting the sum of all SH coefficients to be one? That’s not what we’re trying to achieve here. The only SH coefficients being considered are the l=0 terms – i.e. the first volume of each ODF image, the DC term, the only one whose basis function has a non-zero integral. What we’re looking at is the sum of the l=0 terms across the input images – the sum of the first volumes of each image.
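The arithmetic behind the 0.282095 default follows directly from the l=0 basis function:

```python
import math

# The l=0 SH basis function is the constant Y_00 = 1/sqrt(4*pi); it is the
# only basis function with a non-zero integral over the sphere.
Y00 = 1.0 / math.sqrt(4.0 * math.pi)

# An ODF with DC coefficient c0 therefore has angular integral
#   c0 * Y00 * 4*pi = c0 * sqrt(4*pi),
# so unit angular integral corresponds to:
c0_unit_integral = 1.0 / math.sqrt(4.0 * math.pi)
print(round(c0_unit_integral, 6))         # 0.282095 -- the mtnormalise default
```

This is why only the first volume of each ODF image matters for the normalisation: every higher-order term integrates to zero over the sphere and contributes nothing to the total tissue density.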

The cutoff threshold in the tracking is set assuming that the ODF intensity has a mean value close to 0.282095 = 1/√4π (i.e. unit solid angle). If you normalise to unit mean (and I assume here you mean sum of DC, l=0 terms, not the sum of all coefficients), you are effectively using a lower threshold than would otherwise be the case (by a factor of 0.282095).
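To put a number on that effect (the cutoff value below is only an example, not a claim about the default): normalising the DC sum to 1.0 instead of 0.282095 scales all FOD amplitudes up by roughly 3.545, so any fixed tracking cutoff becomes correspondingly more permissive relative to the data.

```python
import math

default_ref = 1.0 / math.sqrt(4.0 * math.pi)  # ≈ 0.282095
scale = 1.0 / default_ref                     # ≈ 3.545: amplitude inflation when normalising to 1.0

cutoff = 0.05                                 # example fixed tracking cutoff
effective_cutoff = cutoff * default_ref       # what that cutoff amounts to in default-reference units
print(round(effective_cutoff, 4))             # ≈ 0.0141 -- a far more permissive threshold
```

A more permissive effective threshold lets tracking continue through lower-amplitude FOD lobes, which is consistent with the increase in spurious streamlines reported above.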

Given what you’re reporting, I have a feeling whatever normalisation you’re applying is not what we’re talking about in mtnormalise.

Dear @jdtournier,

Thank you very much for clarifying this for me. Now I have a better sense of what normalization I need.

Best Regards,
Oeslle