The issue here arises because the smoothly-varying bias field that is estimated within the processing mask is applied throughout the whole image. You can think of this as the “shape” of the bias field being projected beyond the outer extremities of the mask. Now this is actually a desirable feature - we specifically recommend not using the dwibiascorrect -fsl algorithm precisely because it doesn’t do this - but in some extreme cases that extrapolation of the bias field beyond the brain region can lead to extreme values.
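To make that extrapolation behaviour concrete, here's a tiny standalone sketch (plain Python, with entirely made-up polynomial coefficients; this is not the actual mtnormalise code) of how a polynomial modelled in log space stays gentle inside the fitted region but blows up once evaluated well beyond it:

```python
import math

# Hypothetical 1D illustration: a bias field modelled as exp(polynomial),
# fitted over the "brain" region x in [-1, 1], then evaluated beyond it.
# Coefficients are made up for illustration (a mild cubic in log space).
coeffs = [0.05, 0.1, -0.2, 0.3]

def log_field(x):
    # log-field = 0.05 + 0.1*x - 0.2*x^2 + 0.3*x^3
    return sum(c * x**i for i, c in enumerate(coeffs))

def field(x):
    # The field is multiplicative, so the polynomial lives in log space.
    return math.exp(log_field(x))

# Inside the mask the field is gentle...
inside = [field(x / 10) for x in range(-10, 11)]
# ...but extrapolating the same polynomial well beyond the mask
# (e.g. out towards the nose region) quickly produces extreme values.
outside = field(4.0)
```

Within the fitted region the multiplicative factors stay close to 1; a few "brain radii" out, the same polynomial produces factors in the millions, which is exactly the kind of extreme value seen outside the head.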
What I would suggest doing in your case is using the mrcrop command to reduce the field of view of the DWI data to only encompass the brain region. The field estimated by mtnormalise will be exactly the same, but the extreme extrapolated values in the nose region won’t be applied because the image no longer extends that far spatially.
If you’ve got a good brain mask (which is certainly the case in your screenshots) that positively includes all regions of the brain you’re interested in, you might as well provide that mask via -mask to the CSD step itself. This will not only reduce computation time quite drastically, but also avoid computing FODs in non-brain regions altogether. Depending on what subsequent processing you’re planning, this may in fact be highly desirable, to avoid accidentally making use of those non-brain FODs. It also essentially avoids applying e.g. bias fields far beyond the brain mask from which they were estimated.
Just make sure you’re 100% happy with your brain mask and all regions it includes before proceeding. Even when you’ve got a good brain mask like the one you show, it will typically still not include e.g. parts of most cranial nerves. I’ve recently been involved in several works where we had the sensitivity to detect or observe effects in such nerves; but of course it always required these nerves to be present in the mask.
Thanks a lot for your answers. This clarifies many things, including why I always get some tiny little FODs all over the image even outside the patient’s head!
I suppose that doing mrcrop on the FOD file with the -mask option is equivalent to providing the same brain mask to the CSD, except the second option will save me some computational time beforehand. Is that correct?
Well yes, but not via mrcrop though! Use e.g. mrcalc with -mult to multiply the binary brain mask with the FOD image (or any other image, for that matter) to set all values outside of the binary brain mask to zero. mrcrop has a different purpose: it crops the entire field of view of the image (i.e. the size in voxels of the “bounding box” of your image). The mask in mrcrop is used to determine the size/extent of that bounding box. Also, it won’t actually “mask” your image, only change the field of view.
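To illustrate the distinction, here's a toy 1D sketch in plain Python (not actual MRtrix code; the voxel values are made up):

```python
# Conceptual sketch of the difference between masking (mrcalc ... -mult)
# and cropping (mrcrop), on a made-up 1D "image".
image = [5.0, 7.0, 9.0, 8.0, 6.0, 4.0]
mask  = [0,   1,   1,   1,   1,   0  ]  # binary brain mask

# Masking: multiply voxel-wise; non-brain values become zero,
# but the field of view (number of voxels) is unchanged.
masked = [v * m for v, m in zip(image, mask)]

# Cropping: shrink the field of view to the bounding box of the mask;
# the retained values are untouched. Any non-brain voxels that still
# fall inside the bounding box would keep their original values.
lo = mask.index(1)
hi = len(mask) - 1 - mask[::-1].index(1)
cropped = image[lo:hi + 1]
```

So masking changes values but not extent, while cropping changes extent but not values; which one you want depends on whether you need non-brain voxels gone or merely the image smaller.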
Absolutely. Not even “some”, but likely a lot: in a typical field of view, there’s a very large number of non-brain voxels compared to brain voxels. Give it a shot, and see how much it impacts computation time.
I did the test on one subject and it took 3’10’’ without -mask vs 55’’ with -mask.
Multiplied by hundreds of subjects, indeed it makes a huge difference!
Plus I like the clean result with the mask option, and no more extreme values outside the brain when I normalise.
I have quite some data sets where mtnormalise reports large multiplicative factors outside the brain and upon visual inspection, it is clear that a bias field is introduced rather than removed inside the brain. Reinspecting the entire cohort, whenever there are crazy multiplicative factors outside the brain, there is always also a dark ‘hole’ at the center of the brain. So I think we should be careful to dismiss these effects in general.
Outside of the brain (i.e. beyond the brain mask, minus outlier voxels as detected by mtnormalise), the field just gets extrapolated; and because it’s a polynomial, that quickly goes nuts of course. You’ll see that mostly as extremely large values of the multiplicative field, because the field is also modelled in a multiplicative fashion, so the polynomial modelling lives in the log space. That’s entirely expected behaviour for sure.

Other than that, even though outliers are iteratively detected and removed from anything that’s initially in the brain mask, there’s of course a limit to this as well: outliers are only outliers until there are too many of them. And because these outliers have to be detected iteratively and are co-optimised with the bias field (and tissue scaling factors), this is a tricky balancing exercise. Long story short, it’s best if these outliers don’t appear in large coherent areas, e.g. at the edge of the mask. The latter scenario can arise if the brain mask is too generous “around” most of the brain outline.

dwi2mask works quite well with respect to this property (even though it’s challenged in other areas), and sticks close to the outline of the CSF. This makes sense, because it works on the b=0 and b=… signals themselves. This is also the area where the 3-tissue signals (i.e. the sum of all tissue signals) produce values that mtnormalise works sensibly on. So in this scenario, all is reasonably well, apart from maybe a smooth pattern. Problems can arise if the brain mask comes from another source and is a bit too generous, even if only by 2 or 3 voxels, yet consistently around most of the brain volume. In that case, these patterns can indeed appear or grow to more significant proportions. It might help to simply erode the mask by a few passes, and provide that as the mask to mtnormalise instead. Still, regardless, the factors outside the brain are extrapolated and thus nonsensical for sure; they’re only meant to be used within the brain.
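On the erosion suggestion, here's a rough plain-Python sketch of what a single erosion pass does to a binary mask (in MRtrix3 itself you'd use maskfilter with the erode filter and its -npass option; this just shows the idea, in 2D with 4-connectivity):

```python
# One erosion pass: a voxel stays in the mask only if all of its
# 4-neighbours are also in the mask (boundary voxels are removed).
def erode(mask):
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and all(
                0 <= r + dr < rows and 0 <= c + dc < cols
                and mask[r + dr][c + dc]
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
            ):
                out[r][c] = 1
    return out

mask = [
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [0, 1, 1, 1, 0],
]
eroded = erode(mask)  # the outer shell of the mask is peeled away
```

Each pass peels one voxel layer off the mask outline, which is exactly what you want to shave off an overly generous rim before handing the mask to mtnormalise.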
In the end, this operation is always a bit ill-defined because we’re dealing with other signal variations (e.g. T2 shine-through) that also appear in spatially coherent patterns. E.g. portions of the CST are often picked up as outliers, which is kind of what has to happen to “retain” these signal variations; i.e. not treat them as if they’re a bias field.
It just is what it is… a possible way to normalise. Interpretation of the results of analyses of e.g. “AFD” after this step has to be done accordingly. That’s ok, I think. There’s bigger challenges with this interpretation of “AFD” in any case when lower b-values are thrown in the mix, I reckon.
mtnormalise does what it says on the box; researchers have to make sure they get their interpretation right, and adapt it to the relevant context of the experiment. There’s different ways of normalising, with different interpretations.
As already pointed out by @rsmith above, the extreme values of the multiplicative field are indeed not problematic and entirely expected. Also, my comment has absolutely no bearing on the issue of AFD interpretation after normalisation.
What I wanted to point out is that mtnormalise can easily fail quite dramatically, probably due to problems with the brain mask, and when it does, this seems to go hand in hand with large spurious densities outside the brain. So I would be careful dismissing those spurious densities altogether, as they might still signal another underlying problem that affects also the voxels well within the brain.
When the spuriously large FODs outside the brain occur, it typically comes as the result of an appropriate correction inside the brain, but arises because of the extrapolation of that field outside of the brain. If there are data for which mtnormalise is failing to appropriately correct the bias field inside the brain, then the spurious FODs outside the brain become a secondary observation; the command has not succeeded in its primary function. This would therefore be worth exploring in its own dedicated thread, since I expect in this case the data will appear quite different to that of the original post, and the diagnosis will be far more esoteric.
We could also consider adding a warning to mtnormalise if the maximal absolute value of the field in the log domain exceeds some threshold anywhere in the image; I however don’t have a wide enough gamut of data to determine such a threshold.
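For what it's worth, a minimal sketch of what such a check could look like (plain Python; the threshold value and the function name are made up purely for illustration, and `field_values` stands in for the multiplicative field that mtnormalise can write out, e.g. via -check_norm):

```python
import math

# Illustrative threshold: multiplicative factors beyond ~e^3 (about 20x)
# anywhere in the image would trigger a warning. The value is made up.
LOG_FIELD_THRESHOLD = 3.0

def field_warning(field_values, threshold=LOG_FIELD_THRESHOLD):
    # Check the maximal absolute value of the field in the log domain.
    worst = max(abs(math.log(v)) for v in field_values)
    return worst > threshold, worst

# Gentle field: no warning.
ok_flag, worst_ok = field_warning([0.8, 1.0, 1.3])
# Field with an extreme extrapolated value: warning.
bad_flag, worst_bad = field_warning([0.8, 1.0, 250.0])
```

As noted, the hard part is not the check itself but picking a threshold that separates the benign extrapolation cases from genuinely problematic fits across a wide range of data.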
…this isn’t really the feature to be going by to check for issues. If there is an issue at all, it should be very visible within the brain region itself (so that’s the primary feature I would recommend being on the lookout for). The intensities outside the brain happen all the time; they are the logical consequence of the field, estimated from the brain’s intensities (not from those outside it), being extrapolated to non-brain regions further away from the brain. That extrapolation easily runs off, since there isn’t much stable behaviour to go by in those intensities far outside the brain, and typically also no critical mass of voxels to do anything sensible “further out there”. The latter is more a feature of the brain mask fed to mtnormalise.
If you do want to “anticipate” problems within the brain area itself, i.e. where it matters, those should actually be relatively clear to spot or anticipate before mtnormalise is run: it would indeed be a bad brain mask, or alternatively a too-large, spatially coherent mass of dMRI signal compositions in the area of study that does not comply with the mtnormalise expectations; this is the bit I meant by “interpretation”, so to speak. The correct, or at least intended, behaviour of mtnormalise aligns closely with an expectation, and thus an implied interpretation, of the signal compositions fed to it.
I haven’t spotted the issues you described at all yet, and this across a very large range of kinds of data and data qualities so far; possibly because of taking the correct precautions, e.g. wrt the masks as well as the signal compositions being compatible with the interpretation that mtnormalise relies upon. If you do face this very frequently (seemingly implying a consistent problem), this is of course a matter of concern, and I would strongly recommend checking the data or processing steps (e.g. including mask estimation) up to that point. mtnormalise will do what it says on the box, and by its definition, it does that in a quite straightforward manner; so the preconditions for it to go catastrophically wrong in the first place can actually be quite well “defined”. But it depends what you’re really asking it to do, and for which set of voxels (i.e. mask).
To allow for an insightful way of QC’ing the output, and even diagnosing possible problems, it’s also always a good idea to output the -check_mask image: close to and within the brain, this should correctly highlight (as in: exclude) the voxels that don’t closely fit the mtnormalise interpretation of signal compositions. However, once more, far from the brain it 1) won’t show that feature, because the brain’s signal compositions themselves inform it, and 2) not showing that feature doesn’t matter.
Very close to and within the brain, on the other hand, it should exclude non-(brain+CSF) voxels, as well as frequently a certain specific pair of subcortical GM structures, and e.g. some central portions of the CST (among some other WM structures, due to the same reason as the CST).
So there’s a number of ways to be extra-careful if required due to whatever makes the situation unique; but my main point is that those extreme intensities are probably one of the least “reliable” features to go by to flag issues: they happen in a range of scenarios by definition, without issues. Also, it’s possible to imagine problematic scenarios where this feature would be absent.
We all agree high correction factors outside the brain are not problematic in and of themselves, and that if you want to check for correct operation, the only place that matters is within the brain.
Nonetheless, I think Ben’s point is valid: if there are high intensity regions outside the mask, it may be indicative of issues with the procedure, so it’s worth checking that everything has worked OK – especially if the bias field wasn’t too bad to begin with. So rather than ignoring them completely, treat them merely as an invitation to double-check your data.
Following this discussion, I’ve had a look into the effect of the mask on mtnormalise, and I have to admit that it is indeed a lot more sensitive to the mask than I’d appreciated… Based on this, I think we should at the very least update our documentation to recommend the use of a very conservative mask for this particular step.
Thanks for producing that example; this is actually incredibly helpful to support my points above, and the good thing is, that this also makes it very clear what users should sensibly be checking. Just briefly:
Well yes, but all the other cases hold just as well, and additionally note that:
…the talk was about the brain, not the mask.
By “all the other cases”, I mean:
if there are high intensity regions outside the mask (brain), they may also not be indicative of issues
if there are no high intensity regions outside the mask (brain), there might in fact be issues as well, if the mtnormalise purpose/model/assumption/interpretation/ …you-name-it… doesn’t hold
if there are no high intensity regions outside the mask (brain), there might also be no issues of course
This holds in general. The aforementioned feature is not a good indicator to inform this call; it would be misleading to use it for that purpose. The within-brain region is a better indicator after the procedure (see your example, I’ve copied it somewhere below for visual reference; the screenshots are helpful). But even better, as I mentioned above, is to do a very basic sensibility check of the masks. Your other screenshot is extremely visually clear to illustrate this very point.
By extension, because a new point is being added here (the latter part of the quote below):
I think I understand where you’re coming from, or what you’re referring to, but as written, I don’t think that’s entirely right; if anything, slightly the opposite. If there’s little to no bias field, mtnormalise will by definition be more reliable, as it can make a better call on what is an outlier or not right at the start. I suppose you were referring to the introduced rim kind of effect potentially being easier to spot, which is sensible. But you’d see the rim regardless of where the algorithm was starting off from, if it arises. What I mean is that it doesn’t even require comparison with the “before” state (e.g. summed image): it’s easily identifiable on its own in just the output (e.g. the output summed image, or often just one of the tissue signal maps). But as mentioned above (and below), this is really straightforward to anticipate when checking masks before mtnormalise.
So I’m not entirely sure about that, as I argued above. Beyond the point itself strictly speaking, and this is where we’re going into subjective territory (just to flag that upfront, before we get dense about anything): because this isn’t all that clear an indicator (at all, if you ask me), I think you should be extremely careful with advice based on it. I fully appreciate your consciously careful wording, i.e. “an invitation”, and I’m not against the idea behind that. But, as we all know, these subtleties easily get lost, and this can (will) be interpreted by readers with an interestingly wide range of levels of alarm. I can imagine that might actually end up being counterproductive in a lot of scenarios. So once more, even if a more subtle one, this is an argument to think well before a certain feature is picked as an, even carefully formulated, invitation to double-check. But once more, your screenshots are very helpful, as they show very well what the alarming scenario would be, even without words.
So let’s copy that bit here, as I believe this is actually very helpful for future readers to get exactly the visual they need to follow along with all the words here (source: GitHub):
Some direct advice I can formulate for users here:
The summed image is very helpful, I would absolutely recommend to take a look at it. In practice, if you’re obtaining the relevant brain mask earlier in the pipeline (typically the case), the b=0 image will also be helpful to spot the boundaries of the brain (even already including CSF surrounding the brain, but that’s ok). Also note that you can still use the summed image down the track, and if needed do something about the mask you would be providing to mtnormalise via the -mask option. Just don’t extend the mask at that point (if you don’t have tissue signal map values beyond it); only shrink it if needed.
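Just to be concrete about what the summed image is, here's a plain-Python sketch with made-up voxel values (in practice you'd of course compute this on the actual image files, e.g. extracting the WM FOD's first volume with mrconvert and summing with mrmath in MRtrix3):

```python
# Per voxel: sum of the tissue compartment maps. For the WM FOD,
# only its first (l=0) spherical harmonic coefficient contributes.
# These three voxels' values are made up for illustration.
wm_l0 = [0.7, 0.8, 0.0]
gm    = [0.2, 0.1, 0.0]
csf   = [0.1, 0.1, 1.0]

summed = [w + g + c for w, g, c in zip(wm_l0, gm, csf)]
# Within a well-masked brain this should look smooth and close to
# constant after mtnormalise; an abrupt bright rim stands out clearly.
```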
If this is a very small subject, with (in absolute size) a small brain, naturally be more careful. The fewer brain voxels there are, the more the other, non-brain voxels might proportionally become a problem.
The problematic feature is specifically a (1) coherent cluster / mass of voxels, (2) immediately connected to the brain, (3) and this with a large interface where it connects, (4) and this at a point where on the other side there is no other relevant brain tissue (a cryptic way of, most of the time, saying: on the outside of the brain, i.e. surrounding it). Note these points very carefully: there’s other weird or inaccurate brain masks that don’t have these features: e.g. those containing one or two eyeballs, or a clump of voxels in sinus regions. The latter don’t introduce the problem described here. The easy way to describe features 1 to 4 here in practice would be the “rim” that I mentioned somewhere in an earlier post, and which is illustrated in these screenshots.
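As a toy illustration of features 1 to 4 (plain Python, 2D, with made-up masks): the rim is simply the set of provided-mask voxels falling outside a tight brain outline, and what matters is its size and coherence relative to the brain itself:

```python
# Voxels of the provided mask that lie outside a tight reference brain
# outline. A thin, patchy rim is usually fine; a thick rim hugging most
# of the brain outline is the problematic feature described above.
def rim_voxels(mask, reference):
    return [
        (r, c)
        for r, row in enumerate(mask)
        for c, v in enumerate(row)
        if v and not reference[r][c]
    ]

reference = [  # tight brain outline (made up)
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
generous = [  # mask extending about 1 voxel beyond it on all sides
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
]
rim = rim_voxels(generous, reference)
brain_size = sum(map(sum, reference))
# Here the coherent rim is larger than the brain itself: a clear red flag.
```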
So now that you’re all equipped with the tools to detect the problematic feature, take a look at the first screenshot on the far left in the second panel with the yellow masks. This is the non-dilated mask. Can you spot the problematic feature? I can: it might not be entirely surrounding the brain, but it’s already significantly present on the left and right sides of the brain: I count an extent up to 4 voxels wide on the left (of the screenshot orientation), along a reasonable “length” of the brain there. On the right of the screenshot, it’s about 2~3 voxels, over a very long length. Also one voxel at the bottom and top, but that’s very little of course. So for me, this would be a good reason to already check the results over. But they’re still without problems. So all good.
Now gradually move through those yellow masks, to the second panel, and finally to the third. Observe the rim in the third. Is that a small rim? No, really not. This is the feature you should be looking out for. But you should also be checking how you ended up with that by default in your pipeline; that’s not good: 2~3 voxels at least around the whole brain, 5~6 on the left and right sides of the brain. The summed image indicates all the voxels where the mtnormalise model does not hold, yet you’re asking it to fit those as well. So it will fit those, and that’s not what you want.
Finally, maybe a useful addition to the top row of figures: the bright rim in the far right panel there is a good feature to highlight the problem within the brain. But for the sake of appreciating the true nature and extent of the problem, it would be useful to compute the relative difference in magnitude with the second panel (which would be less affected, or not affected, yet still mtnormalise-corrected). This is because the black “hole” in the middle looks so dramatic due to min-max windowing of the image. It’s actually mostly a bright rim, not a dark hole. So the middle of the brain is less affected than it might appear to be, even in this case of a very wide rim (see above). The rim of true brain voxels that are strongly affected is naturally steered heavily by the equally wide rim of non-brain voxels (see above).
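A minimal sketch of that relative-difference comparison (plain Python; the voxel values are made up for illustration):

```python
# Voxel-wise relative difference between the affected result (with the
# bright rim) and a less-affected version, to see past the min-max
# windowing illusion. Relative difference: |a - b| / mean(a, b).
def relative_difference(a, b):
    return [abs(x - y) / ((x + y) / 2) for x, y in zip(a, b)]

less_affected = [1.00, 1.05, 1.10, 1.00]
affected      = [0.90, 1.00, 2.60, 0.95]  # bright rim in the third voxel

rel = relative_difference(affected, less_affected)
# The rim voxel stands out strongly; the central voxels differ only
# mildly, i.e. the "dark hole" appearance is mostly a windowing artefact.
```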
Long story short: take a look at those masks, really. It’s a very clear feature that can avoid problems up front. And check where in your pipeline you would get masks like this that are supposed to be brain masks. That’s a cause for concern.