At the moment, most default commands, manuals and pipelines we offer don’t actively address the problem you describe: if you concatenate all acquired data, the b=0 images in there are treated as if no such global intensity differences exist between them. Perhaps more importantly: the same goes for the diffusion-weighted images. I’m mentioning the latter because, if you see such differences between the b=0 images of your different acquisitions, the diffusion-weighted images will likely (if not surely) be similarly affected. This matters (potentially a lot) if you’re trying to compute e.g. ADC values, or anything else that involves all your different b-values and the assumptions about contrast and intensity that come with them.
Here we should also avoid misunderstanding each other by clarifying terminology: phrasing it the way you did is a bit unusual (although I understand what you’re referring to); it’s probably better to say that the b=0 intensity differs between acquisitions, that you happen to use a different acquisition for each of your shells, and that each of these acquisitions happens to come with its own b=0 images.
The solution to merging these acquisitions properly would indeed involve some form of intensity normalisation. In fact, the b=0 images aren’t the problem here; they are probably key to the solution, as they are the only contrast that matches across your acquisitions (whereas the diffusion-weighted images differ per shell, and thus per acquisition in your scenario). You should look to correct each acquisition by a globally constant factor, chosen so that the b=0 images across acquisitions match (as well as possible) in overall intensity. Does that make sense? I can think of several different approaches to this, but not all (if any) can be performed very easily (read: with a single command that does the job) in MRtrix3 currently. The simplest solution (from a practical point of view, and with the tools on offer) is probably to compute the average intensity within a brain mask over all b=0 images in each acquisition, and then compute the factor that makes these average intensities match.
- `dwiextract` can extract all b=0 images for you;
- `mrmath` can average the b=0 images in each acquisition;
- `mrstats` can compute the average intensity within a brain mask for each such averaged b=0 image.

A bit convoluted, but it would work.
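To make that concrete, here is a rough sketch of what I have in mind. The filenames (`dwi_shell1.mif` and so on) are made up for illustration, and the mean intensities are placeholder numbers; the MRtrix3 calls are shown as comments since they need your actual data:

```shell
# Per acquisition (illustrative filenames), the MRtrix3 steps would be:
#   dwi2mask dwi_shell1.mif mask.mif                        # brain mask
#   dwiextract dwi_shell1.mif -bzero b0s_shell1.mif         # all b=0 volumes
#   mrmath b0s_shell1.mif mean meanb0_shell1.mif -axis 3    # average them
#   mrstats meanb0_shell1.mif -mask mask.mif -output mean   # mean intensity

# Suppose mrstats reported these mean b=0 intensities (made-up numbers):
mean_ref=1000.0   # the acquisition you normalise the others to
mean_other=800.0  # an acquisition to be rescaled

# The globally constant factor for that second acquisition:
factor=$(awk -v r="$mean_ref" -v o="$mean_other" 'BEGIN { printf "%.6f", r / o }')
echo "$factor"    # prints 1.250000

# Then apply it to the whole of that acquisition, e.g.:
#   mrcalc dwi_shell2.mif "$factor" -mult dwi_shell2_scaled.mif
```

After rescaling, concatenating the acquisitions (e.g. with `mrcat`) should give you a dataset where the b=0 images agree in overall intensity.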
@bjeurissen, @jdtournier: with MSMT-CSD in mind, you must’ve either run into this yourself already, or had users in your environment stumble upon it and ask you for help…? What do you typically tell them / offer them? Maybe we should provide a simple tool that implements a rough strategy, such as the above or otherwise, to perform this step? What do you reckon?