Dwi2mask: holes in mask images

Hi @rgrazioplene,

dwi2mask uses a very simple strategy, and is indeed not perfect. The first thing to check here is probably whether you have corrected for bias fields (intensity inhomogeneities)?

If not, maybe have a go at dwibiascorrect first, and see if dwi2mask performs better after correcting for bias fields. :slight_smile:
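For example, something along these lines (a sketch; filenames are placeholders, and here I'm assuming the ANTs-based algorithm, which requires ANTs to be installed — note that in more recent MRtrix3 versions the algorithm is given as a positional argument, i.e. `dwibiascorrect ants`):

```shell
# Estimate and remove the bias field (ANTs N4 algorithm);
# an initial mask is derived internally if none is supplied.
dwibiascorrect -ants dwi.mif dwi_unbiased.mif

# Re-run the masking on the bias-corrected data.
dwi2mask dwi_unbiased.mif mask.mif
```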


1 Like

Probably unrelated, but it might have an influence on this: it looks like your images have much higher in-plane resolution than through plane. Is this the case? If so, is this due to zero-filling interpolation on the scanner (the default on GE and to some extent Philips scanners)? I’ve typically found that images interpolated in this way don’t produce such good results - sometimes a lot worse. And this is particularly the case for noisy images, since interpolating noise produces strange-looking images and introduces funny artefacts in the reconstructed images (in fact, someone once told me their on-scanner DTI-based fibre tracking performed better once they figured out how to disable zero-filling). This may interact with dwi2mask at least to some extent. I reckon it would be worth figuring out if this is the case, and trying to disable it if you can - although I realise it is a bit late for your study now that the data have been acquired…


I also encounter this problem: I get holes in low-SNR regions (particularly in the deep brain structures), which makes sense if one takes a threshold-based approach.
To solve the problem, I just took the mask from FSL (generated with bet2).
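For reference, one way to do this (a sketch with placeholder filenames; bet2 requires FSL): extract and average the b=0 volumes, then skull-strip the result:

```shell
# Extract the b=0 volumes and average them into a single 3D image.
dwiextract -bzero dwi.mif - | mrmath - mean meanb0.nii -axis 3

# Run FSL's bet2 on the mean b=0; -m additionally writes the binary
# brain mask (meanb0_brain_mask). The -f value may need tuning per dataset.
bet2 meanb0 meanb0_brain -m -f 0.5
```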



Thanks for the suggestion! I considered this – however, unless I’m misreading something, dwibiascorrect needs a brain mask to run, so there’s a bit of a circularity problem.

Awesome – that is what I decided to try as of last night! I’m VERY glad to hear that you’ve had success with FSL masks. I was not looking forward to a return of the not-so-nostalgic days of manual volume editing.

Yes, data collection is over – it was collected on a Siemens scanner, but that by no means rules out zero-filling interpolation.

Hi Romain, did you also need to generate masks for upsampled/scaled images (e.g. for the fixel based analysis pipeline)? The instructions here say to compute a mask on the upsampled normalised dwi images (scaled to 2.0). Although these images have been bias corrected and normalised, I’m still getting holes when I run:

dwi2mask input_upsampled_dwi output_upsampled_mask

Do you (or anyone else) know if it would be appropriate to do the following instead?
mrresize input_dwi_original_sized_mask -scale 2.0 output_upsampled_dwi_mask

Yep, exactly. The two problems are definitely “connected” in that sense. However, the bias correction is a little bit less dependent on a “perfect” mask in order to generate a reasonable solution, whereas the threshold-based (at its core, plus some other additions) masking method is much more dependent on a reasonably good bias correction in order to do its job reasonably well. A lot of "reasonably"s in there. :wink:

In practice, a (“reasonable”) approach given what’s currently available in MRtrix is to run dwi2mask first to get an initial mask (i.e. like the one you showed; imperfect but ok’ish at this stage), use that one to call dwibiascorrect, and use the bias corrected result again as an input to dwi2mask. You could essentially keep on iterating both to do a close-to-joint optimisation, but we find that in practice just doing “initial masking --> bias field correction --> final masking” will get you mostly there in a wide range of scenarios and data qualities.
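Spelled out as commands, that sequence would look something like this (a sketch; filenames are placeholders, and the algorithm flag could equally be -fsl depending on what you have installed):

```shell
# 1. Initial (imperfect) mask, straight from the raw data.
dwi2mask dwi.mif mask_initial.mif

# 2. Bias field correction, seeded with the initial mask.
dwibiascorrect -ants dwi.mif dwi_unbiased.mif -mask mask_initial.mif

# 3. Final mask, computed from the bias-corrected data.
dwi2mask dwi_unbiased.mif mask_final.mif
```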

We’ve actually found ourselves that this approach, rather than dwi2mask'ing the upsampled data (which is indeed what the tutorial suggests doing), gives better results in some cases. Maybe @Dave can comment on this one as well. :slight_smile:

However, take into account that the mask is supposed to be a binary (bitwise) image; so the default options as to interpolation and datatype in mrresize are not optimal for these type of images. By default, mrresize uses cubic interpolation and stores the result as a floating point image. Instead, the way to go for this scenario would be:

mrresize input_dwi_original_sized_mask -scale 2.0 output_upsampled_dwi_mask -interp nearest -datatype bit

…i.e., nearest-neighbour interpolation and storing the result as a bitwise image.

1 Like

Thanks so much for your helpful and informative replies. I tried this method, and it seems to be working.

I’ve been having the same issue: working with a lot of older adult data, and the holes pop up everywhere. I played around a little with the clean_scale parameter, and though it grew/shrank the mask, the holes were still there (and it started including eyeballs). Tried bias correction, no big difference. At this point I’ll have to rely on bet (how I hate playing the fractional intensity game!).

I get the impression that dwi2mask is doing something more than the geometric bet solution, especially for dwi data with multiple bvals. Could any of you comment on how the computation compares between bet vs dwi2mask?

I’ll leave discussion of the mask cleaning filter to @Thijs, but I’ll have a go at this:

I get the impression that dwi2mask is doing something more than the geometric bet solution, especially for dwi data with multiple bvals. Could any of you comment on how the computation compares between bet vs dwi2mask?

bet and dwi2mask are completely different algorithms, apart from the intended outcome.

  • bet attempts to draw a closed mesh surface around the brain using vertices and triangles. AFAIK, it’s intended to operate primarily on T1 images, but people still use it on DWI data, presumably using the mean b=0 image? So it tries to place the vertices of the surface at the outer edge of the brain (where the intensity gradient is greatest), and then fills the voxels within that surface only when generating the output mask image.

  • dwi2mask is a purely image-based approach. It assumes that for each b-value shell (mean value across directions for b != 0), it’s possible to select an intensity threshold that separates brain from non-brain. There are then image-based filtering operations that try to make the mask as ‘brain-like’ as possible (e.g. removing any voxels not connected to the ‘biggest blob’ (the brain), filling any holes, combining results across shells), but fundamentally it’s an image intensity-based segmentation - hence why a B1 bias field can have such a large influence.

Both are far from bullet-proof solutions unfortunately. I’m sure a lot of us have had ideas for better ways to solve this problem, but it’s perhaps not the most ‘fun’ thing to work on…

Wow, super helpful information! TBH I’ve used bet for different reasons in different contexts – it’s nice to do a very lenient skull stripping just to remove all the non-zero voxels outside the skull, to cut down on file size and computation time. Lately I’ve been wanting to coregister my DWI, T1, T2, and MTI data together, so the bet mask helps.

But more to the problem in this thread, I’ve found over the past couple of days that a back-and-forth procedure is working best: first a very lenient bet on all 3 b0’s (f = 0.2), then averaging the b0’s, bias correction, and finally dwi2mask. dwi2mask still leaves a bit of noise outside the skull, so I might insert another bet at the end if I can figure out the right values.
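Roughly, the sequence I’m describing looks like this (a sketch; filenames are placeholders, and I’m assuming the ANTs bias correction here, though -fsl would slot in the same way):

```shell
# Very lenient skull strip of the averaged b=0's (f = 0.2 keeps plenty of brain).
dwiextract -bzero dwi.mif - | mrmath - mean meanb0.nii -axis 3
bet2 meanb0 meanb0_brain -m -f 0.2

# Use the lenient bet mask to seed the bias field correction, then re-mask.
dwibiascorrect -ants dwi.mif dwi_unbiased.mif -mask meanb0_brain_mask.nii.gz
dwi2mask dwi_unbiased.mif mask.mif
```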

Thanks so much!

Minor correction: also for the b=0 “shell”. Otherwise, you’d get a big hole in the mask quite often (the ventricles, as they are often connected to the space outside the brain).

Yes, that makes sense. More generally, to quote myself from somewhere earlier in this thread:

The bias field correction should mostly do away with the (too) low intensities that cause the holes, even when this bias field correction itself is provided with a sub-optimal mask that has holes, as the bias field is expected to be relatively smooth.

The strength of dwi2mask is that, by using all of the average b=… contrasts, it finds the “exact” boundary of the brain (including CSF) pretty well in most places (after bias field correction, that is). Its weakness is that it may also find other non-brain bits (e.g., eyeballs), and even though the final step only selects the biggest connected component, some of those other non-brain bits can still be connected to the actual biggest component we’d like to end up with (e.g., an eyeball via the optic nerve, if the latter appears clearly enough).
The mask cleaning filter (which can be controlled via the clean_scale parameter; but 2 is a very good default for typical resolutions) is designed to attack those attached bits in a conservative manner: it’ll cut them loose from the brain if they’re connected via a thin bridge, but only if the bit is bigger/wider than that bridge; think again of an eyeball attached via the optic nerve. Due to how it works, the mask cleaning filter typically doesn’t create any big holes in the middle of the brain, so there shouldn’t be any need to tune it down or switch it off. As you noticed, that’ll only result in the inclusion of those (eyeball and other) bits. In a way, it’s a shape prior that assumes the brain is a single big, connected and “compact” blob. It also won’t cut off thin extensions that don’t end in another bigger blob, and when cutting off another bigger blob, it won’t cut the bridge too close to the actual brain, as this may just be a slightly sharper feature of the actual brain. But at least it should get rid of the biggest mess (if such mess is present).

bet only works on a single image, so it doesn’t benefit from all b-values. That challenge is “tackled” by a user-controlled parameter. When set right, it can work decently on some data, but as you probably noticed, the challenge is in setting that f parameter.

For us, dwi2mask is mostly convenient, because it works directly on the dMRI dataset, and exploits most of the useful information in there (with regards to getting a mask), with minimal assumptions and no crucial parameter. The mask cleaning filter is a simple addition that focusses specifically on those extra bits we also typically observe in some datasets.
In most typical processing pipelines in MRtrix, the mask serves just to limit the computations of the more expensive algorithms (e.g. CSD, fixel segmentation, etc…) to just the set of voxels that will reasonably be needed. Also, for some algorithms (e.g. registration), we do need to get rid of the bulk of any extra mess, so those voxels/areas don’t bias or distort the outcomes of such algorithms. But note that dwi2mask (and to be honest, bet as well) should never be used for an accurate, and definitely not precise, segmentation of the brain parenchyma (+CSF); i.e., don’t use it for brain volume measurements or something; at least not without inspecting the masks and cleaning them up manually if needed.

So I reckon this is an important question you should ask yourself as well: what will you be using the mask(s) for? And consequently, how much time and effort is it worth spending to make them absolutely “perfect”? And what aspects of the mask are most important for your application?

I’m having success with a similar procedure.

I perform bet2 on the meanb0 image post-eddy correction, then use the subsequent mask for dwibiascorrect. After this, I use dwi2mask on the biascorrected image.

For high b-value data, it’s worth noting that this procedure only works when using the -fsl argument to dwibiascorrect.
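In command form, the procedure would be something like this (a sketch; filenames are placeholders, and the -fsl algorithm requires FSL to be installed):

```shell
# bet2 on the post-eddy mean b=0, keeping the binary mask (-m).
bet2 meanb0_posteddy meanb0_brain -m

# Feed that mask to the FSL-based bias field correction, then re-mask.
dwibiascorrect -fsl dwi_posteddy.mif dwi_unbiased.mif -mask meanb0_brain_mask.nii.gz
dwi2mask dwi_unbiased.mif mask.mif
```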

With ants correction:

Now FSL:

For high b-value data, it’s worth noting that this procedure only works when using the -fsl argument to dwibiascorrect.

Do you have a sense as to why this is the case? While the two algorithms definitely give different bias field estimates, I wouldn’t have expected them to vary enough for the subsequent performance of dwi2mask to vary so massively.

This can come down to a small difference as well. @Alistair_Perry’s example above would be such a case: dwi2mask at some point fills holes in the mask; the outcome of that step can vary massively based on whether such a hole would be “closed off” / encapsulated in the mask (in which case it’s successfully filled), or whether there’s even a single-voxel wide path still connecting the hole to the outside (where it wouldn’t fill up, such as in case of the yellow mask shown by @Alistair_Perry).

There’s not much that can be done about that, within the constraints of the data driven approach of dwi2mask, but having a more properly biasfield-corrected input dataset definitely helps a lot to avoid this problem happening. Iterating dwi2mask with bias field correction would eventually do the job as well. It’s ultimately a chicken-and-egg problem, where a good mask benefits bias field correction, and good bias field correction benefits (intensity driven) mask estimation.

Can you give me a hint on how to fix it manually? I’ve got the same problem.

In any case, dilating the mask (maskfilter with the dilate operation) can help. Here it is.

maskfilter with the median option may also be an alternative for creating a good mask.
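For example (a sketch; the filenames and the -npass value are just illustrative):

```shell
# Fill small holes by dilating then eroding back (a morphological closing).
maskfilter mask.mif dilate - -npass 2 | maskfilter - erode mask_closed.mif -npass 2

# Alternatively, a median filter smooths the mask and can remove small holes and specks.
maskfilter mask.mif median mask_median.mif
```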


1 Like