dwibiascorrect: bias map visual quality check

While it’s slightly concerning, it’s also not entirely surprising given that those fields are being estimated from solitary b=0 volumes in each case. So we don’t know whether that variation is due to subject motion or due to the ill-posedness of bias field estimation from b=0 data alone, particularly without any averaging.

(Obligatory PSA to acquire more than one b=0 volume if possible)

You would need to interrogate this further, as the advice would change depending on the source of the variability. If inter-sequence subject motion genuinely leads to different bias fields (once the data are mutually aligned), then correcting them individually before combining would be preferable. If instead this is variability in the outcomes of the N4 method given the data provided to it, then the field estimated from the combined data may be more reliable than those estimated from the individual b=0 volumes, and correcting afterwards may be preferable.

(More generally, it would be preferable if bias field estimation and correction were included in a large iterative loop correcting all unwanted DWI effects, and using more than just the b=0 data when possible; but we can only do what we can with the tools we have right now…)

If we assume that subject movement (among other factors) has influenced the bias field, would it be appropriate to retrospectively correct for head motion parameters (as nuisance regressors in the design_matrix.txt) when performing fixelcfestats?

  • I would probably suggest having motion parameters as nuisance regressors even in the absence of such bias field difficulties. Running dwifslpreproc with one of the -eddyqc_* command-line options will have it internally execute EddyQC and provide scalar motion measures that can be fed right into a design matrix.

  • For your specific bias field issue, there’s no guarantee that summary subject motion parameters will correlate with the downstream effects. Specifically, the estimate of motion between the first and second b=0 volumes may be useful. But as described above, it’s currently unclear whether the magnitude of the difference in the estimated bias fields will correlate with the magnitude of subject motion between those two acquisitions.

  • If we instead generalise the question to:
    “Does the estimated & corrected bias field influence the outcomes of FBA?”
    , then there’s an opportunity to do something a bit more clever. I was going to wait until an upcoming publication to mention it, but the last time I employed that strategy I ended up being about 5 years later than intended…
    What you can do is take the (log of the) estimated bias field, transform it to template space, project the voxel-wise data to fixels, and use those data as fixel-wise nuisance regressors. That way, any statistically significant effects observed in the data would be “over and above that which can be explained by individual variability in bias field corrections”, as per the Freedman-Lane method. It’s a little bit involved, but the capabilities are already there in the software.
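The fixel-wise nuisance regressor idea above can be sketched roughly as follows. All filenames here are placeholders, and this assumes the bias field was exported at correction time (the -bias option of dwibiascorrect), that a subject-to-template warp is already available from your registration step, and that a template fixel directory exists:

```shell
# Take the log of the subject's estimated bias field
mrcalc bias_field.mif -log log_bias.mif

# Transform the log-field into template space using the subject's warp
mrtransform log_bias.mif -warp subject2template_warp.mif log_bias_template.mif

# Project the voxel-wise log-field data onto the template fixels
voxel2fixel log_bias_template.mif template_fixel_dir/ log_bias_fixels/ log_bias.mif
```

The resulting per-subject fixel data files could then be fed into fixelcfestats as fixel-wise design matrix columns (I believe the -column option is the relevant mechanism there); check the command documentation for the exact invocation, since this is a sketch rather than a tested pipeline.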


I also want to chat briefly about the effect of masking as observed in the data you’ve presented:

  1. If using the recommended dwibiascorrect ants, the estimated bias field is extrapolated beyond the mask that is utilised for estimation of the field, so performing the mrcalc -div operation would yield a smooth field without any voxels omitted. Can I therefore conclude that you’re using dwibiascorrect fsl? That algorithm explicitly zeroes any voxels outside of the mask, as FSL’s fast does not extrapolate the estimated field beyond the mask. I don’t actually know whether the bias field estimates from FSL’s fast are intrinsically more or less variable than those from ants; but given you’re fighting a variability problem, I would suggest looking into that as well.

  2. Your brain masks are looking pretty ordinary:

    1. I suspect that dwibiascorrect fsl would have greater concomitant effects from poor brain masking than would dwibiascorrect ants, because of the way that they work internally (FSL’s fast is explicitly looking for brain tissue types, so if there’s a whole lot of non-brain voxels included in the mask, this may throw the tissue intensity clustering off course).

    2. I’ve put some effort into DWI brain masking of late. I put out a call on the forum earlier in the year looking for data where the current DWI brain masking algorithm fails; your data would be great to add to that database. There are widespread changes to brain masking hopefully coming in version 3.1.0. If brain masking is causing pre-processing issues for you, you might want to experiment with some of the alternative algorithms provided in that changeset.

      (I’m also looking for anyone interested in collaborating on an evaluation of these algorithms, as I’m spread a little too thin right now)
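For anyone wanting to reproduce the visual check discussed in point 1, something along these lines should work (filenames are hypothetical; the -bias option exports the estimated field so it can be compared against the field implied by the division):

```shell
# Estimate & apply the bias field, exporting the field itself
dwibiascorrect ants dwi.mif dwi_corrected.mif -mask mask.mif -bias bias_field.mif

# The field recovered by dividing original by corrected data should match the
# exported one; with the ants algorithm it should be smooth with no zeroed voxels
mrcalc dwi.mif dwi_corrected.mif -div implied_field.mif
mrview implied_field.mif -overlay.load bias_field.mif
```

With dwibiascorrect fsl, by contrast, you would expect the implied field to be zeroed outside the mask, which is the masking effect described above.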


Cheers
Rob