Poor registration with mrregister compared with ANTs

registration

#1

Hi everybody,

I’ve been trying to use mrregister to register some of my patient FOD data onto the IIT HARDI template. I posted something similar a week ago (Mrregister and the IIT HARDI template) but didn’t get any responses; that said, I realize the answer is not obvious, and you are not the creators of the IIT HARDI template.
Since my last post, I’ve been trying a couple of things to figure this out, and I’ve also upgraded to the new MRtrix version 3.0 and redone some of the preprocessing (dwi2response, dwi2fod - thank you again! Version 3.0 dwi2response gives nan)

Here are a couple other results to show how weird this is getting.

Here is a figure of some registered images along with the IIT HARDI template. The cursor is on the same voxel on all of the images.
A. IIT HARDI template
B. An FA map registered onto the IITmean_FA template using ANTs with the cross-correlation metric for the non-linear registration
C. An FOD map registered onto the IIT HARDI template
D. An FA map registered onto the IITmean_FA template using mrregister
E. An FA map registered onto the IITmean_FA template using ANTs with the squared difference metric for the non-linear registration, as in mrregister

Here are the commands that I used to perform the registrations in B-E
B.
antsRegistration --verbose 1 --dimensionality 3 --output [dti_FA_,dti_FA_Warped.nii.gz,dti_FA_InverseWarped] --interpolation Linear --winsorize-image-intensities [0.005,0.995] --initial-moving-transform [IITmean_FA_256.nii.gz,dti_FA.nii.gz,1] --transform Rigid[0.1] --metric MI[IITmean_FA_256.nii.gz,dti_FA.nii.gz,1,32,Regular,0.25] --convergence [1000x500x250x100,1e-6,10] --shrink-factors 8x4x2x1 --smoothing-sigmas 3x2x1x0vox --transform Affine[0.1] --metric MI[IITmean_FA_256.nii.gz,dti_FA.nii.gz,1,32,Regular,0.25] --convergence [1000x500x250x100,1e-6,10] --shrink-factors 8x4x2x1 --smoothing-sigmas 3x2x1x0vox --transform SyN[0.1,3,0] --metric CC[IITmean_FA_256.nii.gz,dti_FA.nii.gz,1,4] --convergence [100x70x50x20,1e-6,10] --shrink-factors 8x4x2x1 --smoothing-sigmas 3x2x1x0vox -x [IIT_mean_tensor_mask_256.nii.gz,dti_FA_mask.nii.gz] --float 0 --collapse-output-transforms 1 --use-histogram-matching 0
C.
mrregister fod_wm.nii.gz IIT_HARDI_256_lmax4.nii.gz -force -transformed fod_wm_Warped.nii.gz -nl_warp fod_wm_Warp.nii.gz fod_wm_InverseWarp.nii.gz -mask1 fod_wm_mask.nii.gz -mask2 IITmean_tensor_mask_256.nii.gz
D.
mrregister dti_FA.nii.gz IITmean_FA_256.nii.gz -type rigid_affine_nonlinear -force -transformed dti_FA_Warped_mrregister.nii.gz -mask1 dti_FA_mask.nii.gz -mask2 IITmean_tensor_mask_256.nii.gz -rigid_scale 0.125,0.25,0.5,1.0 -rigid_niter 1000,500,200,100 -affine_scale 0.125,0.25,0.5,1.0 -affine_niter 1000,500,250,100 -nl_scale 0.125,0.25,0.5,1.0 -nl_niter 100,70,50,20 -nl_grad_step 0.1
E.
antsRegistration --verbose 1 --dimensionality 3 --output dti_FA_SD_ --interpolation Linear --winsorize-image-intensities [0.005,0.995] --initial-moving-transform [IITmean_FA_256.nii.gz,dti_FA.nii.gz,1] --transform Rigid[0.1] --metric MeanSquares[IITmean_FA_256.nii.gz,dti_FA.nii.gz,1,NA,Regular,0.25] --convergence [1000x500x250x100,1e-6,10] --shrink-factors 8x4x2x1 --smoothing-sigmas 3x2x1x0vox --transform Affine[0.1] --metric MeanSquares[IITmean_FA_256.nii.gz,dti_FA.nii.gz,1,NA,Regular,0.25] --convergence [1000x500x250x100,1e-6,10] --shrink-factors 8x4x2x1 --smoothing-sigmas 3x2x1x0vox --transform SyN[0.1,3,0] --metric MeanSquares[IITmean_FA_256.nii.gz,dti_FA.nii.gz,1,NA,Regular,0.25] --convergence [100x70x50x20,1e-6,10] --shrink-factors 8x4x2x1 --smoothing-sigmas 3x2x1x0vox -x [IIT_mean_tensor_mask_256.nii.gz,dti_FA_mask.nii.gz] --float 0 --collapse-output-transforms 1 --use-histogram-matching 0

B represents the typical registration using antsRegistration.
C represents the typical registration using mrregister for FOD images.
D represents the typical registration using mrregister for FA images (I also added an additional scaling level and changed the gradient step to see if it would come closer to the registration in B).
E represents antsRegistration using the MeanSquares metric throughout, instead of Mutual Information for the rigid+affine steps and cross-correlation for the non-linear step.

What seems to be aligning my images best so far - that is, visually - is antsRegistration. It respects the size of my FA maps and really aligns them well with the IITmean_FA template. When registering the same subject’s FOD maps with mrregister, I am unable to get a registered FOD map that is of the same size as the template; however, the deep subcortical white matter structures are well aligned (corpus callosum, corticospinal tract, inferior fronto-occipital fasciculus, superior longitudinal fasciculus, etc.). Moreover, when I use mrregister to register the scalar FA maps, I get a pretty bad registration, and the final size of my image looks like my registered fod map with mrregister. Finally, when I use antsRegistration with the MeanSquares metric, I get something that approaches what mrregister does (although it’s not exactly the same).

As you can imagine, I am really stumped. I would really appreciate any help possible. Please let me know if you have any other suggestions. I would also be willing to send the preprocessed dwi images along with the gradient table if anyone would like to try seeing what they get.

Thanks again for all of your help,
Eric


#2

It seems to me that both ANTs with the squared difference metric (figure E) and mrregister (figure D) behave similarly, suggesting the suboptimal registration is not due to mrregister itself, but to the similarity metric. Unfortunately, squared difference is the only similarity metric in mrregister, if I’m not mistaken.
This type of registration can go wrong if the intensities of the two images are too different. Maybe it would help to scale the intensities of the images to a common range before registering them?
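To make the point concrete, here is a toy illustration (pure Python, nothing to do with MRtrix internals; all signals and values are made up): the sum-of-squared-differences cost is large for two *perfectly aligned* images whose intensity ranges differ, and rescaling to a common range removes that spurious cost.

```python
def ssd(a, b):
    """Sum of squared differences between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

template = [0.0, 0.2, 0.8, 1.0, 0.8, 0.2, 0.0]   # intensities in [0, 1]
subject  = [v * 2.5 for v in template]            # same shape, brighter scan

# Perfect alignment, yet SSD is large purely because of the intensity mismatch:
print(ssd(template, subject))          # large

# Rescale the subject to the template's range before comparing:
scale = max(template) / max(subject)
subject_scaled = [v * scale for v in subject]
print(ssd(template, subject_scaled))   # ~0
```

A registration driven by this metric would happily shrink or stretch the brighter image to reduce the cost, even when the anatomy is already aligned, which is exactly the kind of size bias discussed above.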


#3

Exactly, this will probably fully explain the results you’re seeing, @EricMoulton_ICM.

For a squared difference metric to perform correctly, you should only register images that have the same contrast and intensity range; i.e. only FA maps with FA maps, or FOD images with FOD images.

While this may help, it’ll still introduce weird biases if you’re doing anything beyond plain rigid registration and the actual contrasts aren’t the same: e.g. the white matter will get squeezed together or stretched out in the one image versus the other (depending on the intensity profile, which is essentially the contrast). So for simple motion correction, and given contrasts that at least look similar and have been scaled to a similar range, this may still be sufficient (albeit risky nonetheless). There’s been talk/plans to incorporate other similarity metrics in mrregister at some point (e.g. cross-correlation and mutual information variants), but if that’s still the plan, it’s certainly still going to take a while…
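The advantage of those other metrics can be sketched in a few lines (again a toy illustration with made-up values, not the implementation in any of these packages): normalised cross-correlation is invariant to a global intensity scaling and offset, whereas squared differences is not.

```python
from math import sqrt

def ssd(a, b):
    """Sum of squared differences between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ncc(a, b):
    """Normalised cross-correlation of two equal-length signals."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den

fixed  = [0.1, 0.4, 0.9, 1.0, 0.9, 0.4, 0.1]
moving = [3.0 * v + 0.5 for v in fixed]   # same contrast, different range

print(ssd(fixed, moving))   # large, despite perfect alignment
print(ncc(fixed, moving))   # ~1.0: unaffected by the intensity change
```

This is why a cross-correlation-style metric tolerates intensity differences between subject and template that would badly bias a squared-difference cost.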

@maxpietsch, @jdtournier, …? You’ve got any (near) future plans for such features?


#4

I’d second that; the mean squares metric is likely not suitable for cross-modality registration. I’ve managed to linearly (affine) register across modalities using combinations of manually fine-tuned contrasts to give similar contrast and intensity ranges, but the results of nonlinear registration are unusable more often than not.

@EricMoulton_ICM If you want, you can send me a link with a subject’s FOD, tensor (dwi2tensor) and b0 (dwiextract) image and I can give it a try.

No near-future plans, but we definitely should… If anyone wants to take this on, I’d start in the mrreg branch.

Cheers,
Max


Cross-correlation metric as in Raffelt et al. 2011
#5

Hi everyone,

thanks for posting your thoughts. I’d like to clear something up, because I sense that I haven’t been clear: every registration I have done has been within-modality (FOD to FOD template, FA to FA template, etc.). I haven’t tried to use mrregister or ANTs for cross-modality registration, precisely because the metric of mrregister is the squared difference.

That being said, I’d like to follow up on the comment by @tbilliet. Your comment about the squared difference metric got me thinking. I had actually registered my subject’s FA map to the IIT FA template using FSL’s FLIRT and FNIRT, which also use the squared difference metric. The registration I got was rather close to that of the ANTs cross-correlation metric (Figure B in my above post), meaning that it can’t simply be the metric, because I didn’t get a result like in Figure D or E. Since FNIRT worked but mrregister didn’t, could this be due to optimization problems in the subsampling, the number of iterations, the gradient descent, or some other parameter? The same parameters might have to be changed for my FOD files. Any thoughts on this?

@maxpietsch, I would really appreciate it if you gave the registration a shot, thank you. Here’s a Dropbox link with the necessary files. I’ve also included the template, the pre-processed dwi.nii.gz, and the bvals and bvecs if you want to recalculate a response function, etc. I’ve also given my warped FOD file of the white matter for comparison. Finally, I’d like to give a little disclaimer: this data is straight out of our emergency clinic, so it is normal that the top of the brain is cut off (the radiologists aren’t always so concerned with this…). I’m hoping that using my brain mask will circumvent this issue. If it’s any consolation, I have patients with almost full coverage of the brain with whom I have the same problem.

Thank you again for your comments and insight.


#6

Hi Eric,

I’m not an expert on registration, but I think ANTs and FNIRT are based on different models, and this is the main reason they can give different results. ANTs uses a symmetric parameterization of the shortest path of diffeomorphisms connecting two neuroanatomical configurations (a diffeomorphic transform, similar to mrregister), whereas FNIRT is based on a spline model. So in theory, mrregister should give you results similar to ANTs with the mean squares metric; I think figures D and E are quite similar. There is a very nice review of registration methods here

Regards,

Manuel


#7

@EricMoulton_ICM I gave it a spin. Bottom line: I think your images can be roughly aligned, but for better alignment you’ll need to do some fine-tuning. The result below is not ideal but hopefully gets you started.

The images

mrinfo template_fod.mif 
************************************************
Image:               "template_fod.mif"
************************************************
  Dimensions:        256 x 256 x 256 x 15
  Voxel size:        1 x 1 x 1 x 1
  Data strides:      [ 1 2 3 4 ]
  Format:            MRtrix
  Data type:         32 bit float (little endian)
  Intensity scaling: offset = 0, multiplier = 1
  Transform:                    1           0           0           0
                                0           1           0           0
                                0           0           1           0

mrinfo sub_fod.mif 
************************************************
Image:               "sub_fod.mif"
************************************************
  Dimensions:        256 x 256 x 44 x 15
  Voxel size:        1.0938 x 1.0938 x 3 x 1
  Data strides:      [ -1 2 3 4 ]
  Format:            MRtrix
  Data type:         32 bit float (little endian)
  Intensity scaling: offset = 0, multiplier = 1
  Transform:                    1           0           0      -143.5
                               -0           1           0      -96.83
                               -0           0           1      -59.68

affine registration

mrregister sub_fod.mif template_fod.mif -info -type affine -mask1 mask.mif -affine affine
mrtransform sub_fod.mif -linear affine  sub_fod_a.mif -template template_fod.mif
cat affine

#centre 61.8269839502482 77.71968852513824 63.5994907128613
0.988832220942192 -0.1361813446584312 -0.183502224997793 -92.14744718244954
0.04260321743823417 0.8566913214839063 -0.4320483135094917 -26.24398430748857
0.2500557102936811 0.4831384306240962 0.7718184703499078 -174.1902765935251
0 0 0 1
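For anyone less familiar with these transform files: the 4×4 block is a homogeneous affine matrix, so it acts on a point by multiplying [x, y, z, 1]. A minimal pure-Python sketch, with the numbers copied from the file above (the exact mapping direction and the role of the #centre comment depend on mrregister’s conventions, so treat the interpretation here as an assumption):

```python
# The affine matrix written out by mrregister above (last row is [0 0 0 1]).
affine = [
    [0.988832220942192,  -0.1361813446584312, -0.183502224997793,  -92.14744718244954],
    [0.04260321743823417, 0.8566913214839063, -0.4320483135094917, -26.24398430748857],
    [0.2500557102936811,  0.4831384306240962,  0.7718184703499078, -174.1902765935251],
    [0.0, 0.0, 0.0, 1.0],
]

def apply_affine(matrix, point):
    """Apply a 4x4 homogeneous affine to a 3-vector."""
    x, y, z = point
    h = (x, y, z, 1.0)
    return [sum(m * v for m, v in zip(row, h)) for row in matrix[:3]]

# The origin lands on the translation column (fourth column of the matrix):
print(apply_affine(affine, (0.0, 0.0, 0.0)))
```

The upper-left 3×3 block carries the rotation, scaling, and shear, and the fourth column the translation, which is why the large translation values here reflect the big offset between the two image spaces shown in the mrinfo transforms above.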

linear transformation

mrtransform sub_b0.mif -linear affine sub_b0_a.mif -template template_fod.mif
mrtransform sub_fod.mif -linear affine sub_fod_a.mif -template template_fod.mif

b=0 subject and template:


Phew, affine registration worked, but large deformations are required… Also, there are some high-intensity artefacts, probably due to a failure in distortion correction.

bring the intensity of the subject’s FOD image roughly to the same range as that of the template

mrcalc sub_fod_a.mif 0.75 -mult sub_fod_as.mif
mrcalc sub_fod.mif 0.75 -mult sub_fod_s.mif

FOD subject (affine, multiplied by 0.75) and template:

nonlinear registration

We’ll need large deformations. Let’s start with a lower spatial resolution than the default values, but use more registration stages.

mrregister sub_fod_s.mif template_fod.mif -type  nonlinear -affine_init affine -nl_scale 0.1,0.2,0.3,0.4,0.5,0.6 -info -nl_lmax 2,2,2,2,2,2  -nl_warp_full warp.mif -transformed sub_fod_asn.mif

FOD subject (overlay is the template), without intensity adjustment (yours) and after adjusting the intensity (sub_fod_asn.mif):


The brain size is roughly the same, but the deformation is not ideal; for instance, the subject’s ventricles are not compressed enough. I’d start by decreasing -nl_update_smooth and -nl_disp_smooth until you get funny local distortions in your image. At that point, I think there is little more we can do with the mean squared metric.

Let’s test whether it is the scale and lmax parameters or the intensity scaling that causes the better-matching brain size:

mrregister sub_fod.mif template_fod.mif -type nonlinear -affine_init affine -nl_scale 0.1,0.2,0.3,0.4,0.5,0.6 -info -nl_lmax 2,2,2,2,2,2 -nl_warp_full warp_.mif -transformed sub_fod_an.mif

Worse than the scaled version, but better than with the default parameters (your image). So both contribute.

Cheers,
Max


#8

@maxpietsch, this is really great! Thank you for taking the time to investigate this in such depth and for writing such a clear response. It’s a huge leap from where I started, so I really can’t thank you enough. I will definitely play with the nl_update_smooth and nl_disp_smooth parameters to see how far I can push this. I’m sure many others will benefit from this response as well.