I assumed that the contributions per tissue type from FODs would sum to 1 (or any other value that can be set)
Not unless you directly intervene to do so. The reconstruction is simply the weighted sum of the input response functions (with angular distribution in the case of lmax > 0) that most closely matches the empirical DWI signal, given the non-negativity constraint. There is no enforcement - or even regularisation - of what the sum of those weights must be. This is an important distinction between our specific implementation of CSD and basically every other diffusion model. There’s some thoughts on the subject here, as well as the CSD / AFD / MSMT manuscripts.
Indeed, another way to think about it: if the contributions per tissue type summed to 1.0, then mtnormalise would have no purpose; the entire raison d’être of that method is that these contributions don’t sum to 1.0.
However, the normalised CSD coefficients from multi-shell multi-tissue CSD that I normalised do sum to 1.
This sounds like you are performing a direct intervention to force the signal contributions from the tissues to sum to 1 in each voxel? That’s not an invalid thing to do; it just has consequences for how such data should be interpreted.
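The kind of direct intervention described here can be sketched in a few lines of NumPy. This is a toy illustration, not MRtrix3 code: the tissue maps and values are hypothetical, standing in for the l=0 terms of the per-tissue ODFs in each voxel.

```python
import numpy as np

# Hypothetical per-tissue "signal contribution" maps (WM, GM, CSF)
# over a tiny 2x2 image; values are purely illustrative.
tissue = np.array([
    [[0.6, 0.9], [0.7, 0.5]],   # WM
    [[0.3, 0.2], [0.4, 0.1]],   # GM
    [[0.2, 0.1], [0.1, 0.2]],   # CSF
])

# Direct intervention: divide each voxel by the sum of its tissue
# contributions, forcing the fractions to sum to exactly 1.0 per voxel.
total = tissue.sum(axis=0)
fractions = tissue / total

print(fractions.sum(axis=0))   # every voxel now sums to exactly 1.0
```

After this step the values are relative fractions only: any information about absolute signal magnitude in each voxel has been discarded, which is the interpretive consequence mentioned above.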
Can someone please clarify how the normalization in log space is done?
One way to think about mtnormalise is: rather than forcing the sum of tissue fractions to be precisely 1.0 in every brain voxel individually and independently, it pushes these sums of fractions toward 1.0, in a way that enforces spatial smoothness of the multiplicative factor that is applied. Any B1 bias field present in the imaging experiment is expected a priori to be spatially smooth, and so correction of such should obey this expectation; conversely, if the sum of fractions is forced to be precisely 1.0 in every voxel, the multiplicative factors applied to make this the case may vary abruptly between adjacent image voxels. That said, there are analyses where forcing the signal fractions to sum to 1.0 is desirable, or even necessary; they are just different forms of normalisation with different goals.
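The contrast between the two approaches can be demonstrated with a 1-D toy example. This is a conceptual sketch only, assuming a hypothetical smooth multiplicative bias plus voxel-wise noise, and using a low-order polynomial as a stand-in for a smoothly constrained correction field (mtnormalise itself uses its own field model):

```python
import numpy as np

# Toy 1-D "image": per-voxel sum of tissue fractions, corrupted by a
# smooth multiplicative B1-like bias field plus voxel-wise noise.
x = np.linspace(0.0, 1.0, 50)
bias = 1.0 + 0.4 * x                       # smooth bias field
rng = np.random.default_rng(0)
sums = bias * (1.0 + 0.05 * rng.standard_normal(x.size))

# Per-voxel normalisation: factor 1/sums makes every voxel sum to
# exactly 1.0, but the factor inherits the voxel-wise noise and can
# jump abruptly between adjacent voxels.
per_voxel = 1.0 / sums

# Smoothness-constrained alternative: fit a low-order polynomial to
# the correction factors, so sums are pushed *toward* 1.0 without the
# field chasing noise voxel by voxel.
coeffs = np.polyfit(x, per_voxel, deg=2)
smooth = np.polyval(coeffs, x)

corrected = sums * smooth
print(corrected.mean())   # near 1.0, but individual voxels still deviate
```

The smooth field removes the slowly varying bias while leaving the voxel-wise noise in place, which is exactly the behaviour one wants if the noise is not attributable to B1 inhomogeneity.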
Use of logarithm space within mtnormalise is simply a way to make the bias field estimation and intensity normalisation better-posed both mathematically and pragmatically. In the extreme case, an estimated multiplicative field extrapolated beyond the region in which it was estimated could become negative, and so correcting the data using that field could result in negative ODFs; by estimating the field in logarithm space, and applying the exponential transform to such in order to obtain the multiplicative factor in each voxel, it is mathematically impossible for those factors to be negative. It’s not an uncommon numerical optimisation technique whenever data are intended to be multiplicative, which is the case both for the bias field and the tissue balancing factors in mtnormalise.
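The positivity guarantee from the log-space trick can be shown with a minimal sketch. The field model here is a hypothetical linear fit (not the field model mtnormalise actually uses); the point is only that exponentiating a log-space estimate can never yield a negative multiplicative factor, even under extrapolation:

```python
import numpy as np

# A field that is positive over the fitted region but trends negative.
x_fit = np.linspace(0.2, 0.8, 30)
field = 2.0 - 2.0 * x_fit   # > 0 for x < 1, negative beyond

# Naive linear-space fit, extrapolated past the fitted region:
lin_coeffs = np.polyfit(x_fit, field, deg=1)
print(np.polyval(lin_coeffs, 1.5))   # negative: an invalid multiplicative factor

# Log-space fit of the same data, exponentiated on application:
log_coeffs = np.polyfit(x_fit, np.log(field), deg=1)
print(np.exp(np.polyval(log_coeffs, 1.5)))   # strictly positive by construction
```

Whatever value the fitted log-field takes, exp() maps it to a strictly positive number, so the corrected ODF amplitudes cannot be driven negative by the normalisation step itself.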