Dwibiascorrect after dwipreproc

Dear MRtrixers,

I am experiencing a problem running dwibiascorrect on the output of dwipreproc. I get the following message: “dwibiascorrect: [ERROR] Command failed: N4BiasFieldCorrection -d 3 -i mean_bzero.nii -w mask.nii -o [corrected.nii,bias.nii] -s 2 -b [150] -c [200x200,0.0]”. I’ve checked the outputs of dwipreproc and they looked correct.
However, if I run eddy_openmp in FSL and then run dwibiascorrect on the NIfTI output, it works just fine.
Any idea?

Thanks

Chiara

Hi Chiara,
When the dwibiascorrect script exits, it won’t delete the working temporary folder (at least with later versions of MRtrix). You can then change directory into this temporary folder, and re-run the command that failed. This should then give you some idea of why it failed (hopefully an error message).
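For example, something along these lines, re-running the exact N4BiasFieldCorrection call quoted in the error message (the temporary folder name below is just a placeholder; use the one the script reports):

```shell
# Change into the temporary folder left behind by dwibiascorrect
# (placeholder name; the script prints the actual location):
cd dwibiascorrect-tmp-XXXXXX/

# Re-run the exact command that failed, to see N4BiasFieldCorrection's
# own diagnostic output. The bracketed arguments are quoted here so the
# shell does not try to glob-expand them:
N4BiasFieldCorrection -d 3 -i mean_bzero.nii -w mask.nii \
  -o '[corrected.nii,bias.nii]' -s 2 -b '[150]' -c '[200x200,0.0]'
```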
Cheers,
Dave

I am also having a problem with dwibiascorrect after calling dwipreproc: it doesn’t show any errors when running, but the output image is all black (there’s nothing). What can I do?

Hi @carolinajimenez26,

For any one of us to be able to help you, you’ll have to provide at least a bit more information; for instance the output of dwibiascorrect at the command line, maybe a screenshot of your input, the exact dwibiascorrect command line you ran (i.e. typed), and your MRtrix3 version.

Cheers,
Thijs

This is the input image (preprocessed with dwidenoise and dwipreproc, in that order):

This is the command I used:

dwibiascorrect dwi_distCorr.mif biasCorrect.mif -fsl

And this is the output I get:

I’m not sure what’s going on here, but a few thoughts:

  • the intensity value displayed in the output is shown as ‘?’, which means it’s not a finite number, most likely NaN (not-a-number) resulting from a divide-by-zero. This would indicate that FSL’s fast has produced a zero bias field, but that’s just a hunch. And this admittedly doesn’t help you much – but it’s a suggestion as to where to start digging…

  • if you try running the script with the -nocleanup option, you’ll be able to inspect intermediate files produced by the script (look at the output of the script to see where that folder is located, and all the commands invoked). It will definitely be instructive to check the direct inputs and outputs to & from the fast command. You could also run with the -debug option, which will produce potentially informative diagnostic information. Do post the full output of the command as well, by the way, not just the command itself: that’s often how we can spot what’s going on.

  • Is there any particular reason why you need to use the -fsl option? I get the feeling that the -ants approach is generally more robust (others will no doubt confirm).
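To illustrate the second suggestion above, the same command could be re-run as follows (file names as in the earlier post):

```shell
# Keep the scratch folder and print verbose diagnostics;
# the script will report where the scratch folder is located:
dwibiascorrect dwi_distCorr.mif biasCorrect.mif -fsl -nocleanup -debug
```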

Yeah, an added “problem” with the -fsl option is that it only ends up correcting inside the brain mask, and leaves the intensities untouched outside of it, creating quite an artificial image… we’re now warning more strongly against using this option in the upcoming RC3 for these reasons.

It is because I already have FSL installed, but I am going to try with ANTs to see if the result is better… Thank you for the advice; I am going to check that and will report back on this issue :slight_smile:

Dear experts,

I also ran into the same problem with one participant I am analysing, but with using the -ants option, as shown below:

My input is an eddy corrected image (via using dwifslpreproc):

And my output, upon running dwibiascorrect command, I get this:

My full command to run dwibiascorrect was this:

dwibiascorrect ants -mask brain_mask.mif eddy_corrected.mif biasfieldcorrected.mif

The mask I used to generate for the dwibiascorrect was done with dwi2mask, which looks like this:

However, I am only using the dwi2mask output to run dwibiascorrect - the actual brain mask I am using for my main analysis was generated via FSL’s BET. I also tried iterating between dwi2mask and dwibiascorrect in order to get a better brain mask, but this does not work, since the input at that point would be the ‘blank’ bias field corrected image.

Running through these commands (dwi2mask & dwibiascorrect) on all of my other participants (n=156) worked fine, but just for this one participant, I got this result. Because of this, I am unable to generate a response function (dwi2response) and run mtnormalise for this one participant. I am also using MRtrix version 3.0.2.

So, for this one participant, I added the options for -nocleanup and -debug for the dwibiascorrect command, and found these images:

For bias.mif

For corrected.nii

For mean_bzero.mif

For result.mif

But I am unsure where the problem lies. If anyone could point me in the right direction or give any advice as to what I can do to keep this participant in my study analysis, please let me know. Also, please let me know if more information is needed. Thanks!

Update:

I applied a more liberal mask by setting the -clean_scale option to 0 in dwi2mask, and it ran through dwibiascorrect okay. With this, I was also able to generate a response function (dwi2response) and apply joint bias field correction and intensity normalisation (mtnormalise).
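For reference, the workaround described above corresponds to something like this (file names are examples):

```shell
# Disable the mask cleaning filter to obtain a more liberal mask:
dwi2mask eddy_corrected.mif liberal_mask.mif -clean_scale 0

# Then feed that mask to the bias field correction:
dwibiascorrect ants -mask liberal_mask.mif eddy_corrected.mif biasfieldcorrected.mif
```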

If there are better solutions than this, please do let me know - thanks :slight_smile:

No, I think this is the best way to deal with this. That initial mask was really bad… I was going to ask why you were passing a mask to dwibiascorrect manually, but I expect it’ll generate the exact same mask internally if you don’t provide one, right?

In general, robust masking is proving remarkably difficult to achieve consistently, as @rsmith will no doubt confirm… :wink:


Yes, that mask from the dwi2mask algorithm was just a first pass, and iterating between dwibiascorrect and dwi2mask tends to fill in most of the holes (as suggested here). Though, with my dataset, I still tend to have small holes throughout the brain mask, usually between the occipital lobe and cerebellum, even after the second or later mask pass, like so:

This also seems to be the case after extracting a mask from the upsampled DWI data. So, I opted to use FSL’s BET as the main brain mask, while using the mask from dwi2mask for bias field correction for now. Though I do like dwi2mask’s more constrained inclusion of the brain matter comparably, as BET seems to include the meninges too… one of my colleagues is using the ‘erode’ option from mrcalc to constrain the BET mask more, so I might try that out, too.
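The iteration between mask estimation and bias field correction mentioned above can be written out explicitly; a minimal two-pass sketch (file names hypothetical):

```shell
# First pass: initial mask, then bias field correction using it:
dwi2mask dwi.mif mask_pass1.mif
dwibiascorrect ants -mask mask_pass1.mif dwi.mif dwi_bc1.mif

# Second pass: re-estimate the mask from the corrected data:
dwi2mask dwi_bc1.mif mask_pass2.mif
dwibiascorrect ants -mask mask_pass2.mif dwi.mif dwi_bc2.mif
```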

Just a follow-up question to this - as I am using the mask from dwi2mask primarily for dwibiascorrect, would it be better to use the second bias corrected image, generated from the second mask? The second mask tends to look more ‘wholesome’, like the one above, but when comparing the bias corrected images, they don’t seem to reveal too many differences…
(image: bias corrected image with first pass mask)
(image: bias corrected image with second pass mask)

Ah, I did not realise that I do not need to manually input a mask for dwibiascorrect…saves me a step! But yes, the mask internally generated within dwibiascorrect seems to be the same as when using dwi2mask.

And my output, upon running dwibiascorrect command, I get this:

It’s hard to know exactly what’s going on here, because mrview is not just displaying one image: there’s an overlay image being placed on top, and if the overlay opacity is 1.0, that can result in the overlaid image hiding the main image, even if it’s filled with zeroes. In your case, the intensity windowing on the overlaid image means that pretty much any reasonable image would come out as grey.

Better would be to show the input and output images with the same intensity windowing, without ever touching the overlay tool, as well as the output of mrstats on both.
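Concretely, that check might look like this (using the file names from earlier in the thread):

```shell
# Summary statistics of the input and output images, for direct comparison:
mrstats eddy_corrected.mif
mrstats biasfieldcorrected.mif
```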


Actually on second thought, I think maybe I know what’s going on here:

  • There’s extreme values hiding somewhere in your DWI volume.

  • When you load the original DWI volume as the main image in mrview, the intensity windowing is determined based on the maximum and minimum intensities in the first slice you view.

  • When you load the bias-field-corrected image in the overlay tool in mrview, the intensity windowing is determined based on the maximum and minimum intensities of the whole volume (as there’s no guarantee that an overlay image resides on the same voxel grid as the main image, it could be in any arbitrary orientation, and therefore the whole volume needs to be checked in order to determine appropriate windowing).

The colour bar at the top right of the window is the intensity scaling of the overlay image. With a range of -10,000,000 to 36,000,000, almost any reasonable neuroimaging data is going to appear as a flat dark grey. Even the whitest areas of whatever image is loaded as the main image, with intensity 37,000, are sufficiently close to 0.0 with your current windowing to be indistinguishable.

There are a few possibilities. Either these extreme values are actually present in your input DWI data and you just didn’t notice because of the way mrview determines its intensity windowing, or dwibiascorrect has somehow introduced those extreme values in between producing “result.mif” and mrconverting that image to your requested output image location. Or maybe they’re there in “result.mif” too, but you manually altered the intensity windowing in that case?

In general, robust masking is proving remarkably difficult to achieve consistently, as @rsmith will no doubt confirm…

If there were an appropriate emoji for PTSD I would use it ad nauseam.

Nevertheless, if you’re either interested in or frustrated with DWI brain masking, you may be interested in these changes coming for 3.1.0, which provides many algorithms to choose from.

one of my colleagues is using the ‘erode’ option from mrcalc to constrain the BET mask more

maskfilter* :grimacing:
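That is, erosion is provided by maskfilter rather than mrcalc; for example (file names and pass count are illustrative):

```shell
# Erode the BET mask by two voxel layers:
maskfilter bet_mask.mif erode bet_mask_eroded.mif -npass 2
```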

would it be better to use the second bias corrected image, generated from the second mask? The second mask tends to look more ‘wholesome’, like the one above, but when comparing the bias corrected images, they don’t seem to reveal too many differences

Theoretically, if you were to run some large number of iterations where you bounce between mask determination and bias field estimation, after some number of iterations there should no longer be any difference between subsequent iterations. Whether or not this is the case, however, depends on the precision of whatever is providing the bias field estimates, and on the robustness of the brain masking algorithm. And how many iterations would be required depends on the nuances of those algorithms as well.

This is an idea that I’ve actually been tinkering with myself for quite some time: iterating between these and measuring differences in binary masks between iterations to detect convergence. It can work, but it’s still constrained by the limitations of whatever brain masking algorithm is being utilised.
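A rough sketch of such a convergence loop, using the fraction of disagreeing voxels between successive masks as the stopping criterion (all file names, the iteration cap and the 0.1% threshold are hypothetical choices, not part of any MRtrix script):

```shell
#!/bin/bash
# Alternate between bias field correction and mask estimation until the
# binary mask stops changing appreciably between iterations.
dwi2mask dwi.mif mask_prev.mif
for i in $(seq 1 10); do
  dwibiascorrect ants -mask mask_prev.mif dwi.mif dwi_bc.mif -force
  dwi2mask dwi_bc.mif mask_new.mif -force
  # Fraction of voxels on which successive masks disagree
  # (mean of a binary "not equal" image):
  frac=$(mrcalc mask_prev.mif mask_new.mif -neq - | mrstats - -output mean)
  echo "Iteration ${i}: fraction of differing voxels = ${frac}"
  mv mask_new.mif mask_prev.mif
  # Declare convergence below 0.1% disagreement:
  awk "BEGIN { exit !(${frac} < 0.001) }" && break
done
```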

What I’ve settled on myself for now for my own DWI pre-processing tasks (adjusts tie ahead of shameless self-plug) is in the 0.5.0 update of my connectome BIDS App. My approach is different again (code for anyone so inclined) in that I’m utilising mtnormalise both to bias field correct the DWIs and to provide a total tissue density image that I then threshold to produce a binary brain mask (on the assumption that a brain mask should include voxels where there are brain tissues or fluid, and not those where there are not), which is then fed back to the start of the iterative loop. In this specific instance, iterating repeatedly can be detrimental in some data because of mtnormalise's sensitivity to the inclusion of non-brain voxels in the mask, which self-perpetuates if performed iteratively, which is why I only iterate twice for this particular approach; but when I was using just dwi2mask in the same loop, I could get something approaching convergence after 5-10 iterations.

Maybe I should convert that approach into a dwi2mask algorithm…


Hi Dr. Smith,

Thank you for your input :slight_smile: I tried out a few of your tips to try and figure out things, as you suggested below:

Better would be to show the input and output images with the same intensity windowing, without ever touching the overlay tool, as well as the output of mrstats on both.

I checked the image values via mrstats, and found that the bias field corrected image contains many negative values (which seems abnormal?), with the min and max ranging from roughly -10,000,000 to 36,000,000.

Upon changing the range on the intensity scale for this bias field corrected image to something more of a similar intensity range to my eddy input image, I can kind of see an image of a brain, but it does look weird:

What is interesting is that, when I checked image values of the eddy input of this participant and compared it with other participants (n=156), the values are about 3 times larger:

eddy corrected image from this one ‘deviant participant’:
image

eddy corrected image from typical participants from my dataset:
image

Because of this, I wonder if it might be beneficial for me to apply some kind of calculation to this ‘deviant image’, such as division by 3, via mrcalc or mrmath, prior to applying the preprocessing pipeline corrections?

The additions to the brain masking for the next MRtrix version are looking very cool! Thank you so much for all of your hard work on this :slight_smile:

One thing that caught my eye was this:

      volume        mean      median         std         min        max     count
       [ 0 ]     2721.66     1077.81      3651.5     -117573    65861.2    778752
  1. The input to dwibiascorrect contains some extreme negative values. These can (I think) arise even in the absence of negative values at the input of eddy due to a non-diffeomorphic warp field arising from topup (plus motion / eddy current corrections). It’s possible that such extreme negative values are having deleterious effects in N4BiasFieldCorrection. So you could try running your DWI data through mrcalc - 0.0 -max before dwibiascorrect.

  2. The extreme values lie outside of the range of 16-bit signed / unsigned integers, whereas your non-problematic data do lie within this range. I would have expected that if N4BiasFieldCorrection had such a limitation then it would have been identified much earlier, but it is nevertheless worth trying e.g. dividing your DWI data by 3 and re-trying dwibiascorrect.
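Both suggestions translate directly into mrcalc calls (file names are examples):

```shell
# 1. Clamp negative values to zero before bias field correction:
mrcalc eddy_corrected.mif 0.0 -max dwi_nonneg.mif

# 2. Alternatively, rescale the data towards the range of the
#    non-problematic subjects:
mrcalc eddy_corrected.mif 3 -div dwi_rescaled.mif

# Then re-run the correction, e.g.:
dwibiascorrect ants -mask brain_mask.mif dwi_nonneg.mif biasfieldcorrected.mif
```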

There was also an addition made in 3.0_RC3, specific to dwibiascorrect ants (code here), that tries to prevent wild fluctuations in image intensities between subjects that could otherwise be introduced by N4BiasFieldCorrection. Sometimes, in the process of correcting the bias field, it will also globally upscale or downscale the image intensities; the purpose of this block of code is to undo that global scaling. It’s worth running dwibiascorrect with the -debug option on both problematic and non-problematic data and reporting back here. It’s possible that this specific process is multiplying your data by a negative number, which would make the brain look “weird”, as you say.

Rob
