Registration of structural and diffusion weighted data

Hi all

Thanks for providing so much information.
I’m trying to reproduce Kerstin’s above-mentioned protocol for image registration.
Unfortunately I get an error message after the 2nd command:

Could not open matrix file tmp.nii.gz

I can run the command without the -init part and continue, but after mrtransform I end up with a non-registered and quite distorted T1 (see image at the end). Flirt registration from within the FSL GUI works fine, but then I end up with such a low-quality T1 image that it isn’t useful at all :confused:
I tried it with skull-stripped images as well - also without success.
Does anyone have suggestions for a better registration?

I appreciate any help and I am sorry for my naive question.
Best regards, Lucius

Unfortunately I get an error message after the 2nd command:

Pretty sure tmp.nii.gz should be replaced with tmp.mat: this is the transformation estimate generated by the first command, which is then fed back to flirt as the “initial” transform estimate via the -init option.

Flirt registration from within the fsl gui works fine, but there I end up with such a low quality t1 image which is not useful at all

This is because with default usage, flirt provides an output image that is both transformed and re-gridded to the target image. The trick is to have flirt output the transformation matrix, then apply that transformation in a way that alters only the image transformation matrix in the header, and doesn’t resample the image onto a new voxel grid. I really need to add this one to the FAQ…
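In concrete terms, that workflow could look something like the following sketch (all filenames here are placeholders; mean_b0.nii.gz stands for whatever diffusion-derived image you registered to the T1):

```shell
# Estimate the rigid transform with flirt, keeping only the matrix:
flirt -in mean_b0.nii.gz -ref T1.nii.gz -dof 6 -omat diff2struct_fsl.mat

# Convert the FSL matrix into MRtrix format:
transformconvert diff2struct_fsl.mat mean_b0.nii.gz T1.nii.gz \
    flirt_import diff2struct_mrtrix.txt

# Apply the inverse to the T1 header only: no regridding, no interpolation:
mrtransform T1.nii.gz -linear diff2struct_mrtrix.txt -inverse T1_coreg.mif
```

Because only the header transformation is modified, the T1 keeps its original voxel grid and image quality.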

Does anyone have suggestions for a better registration?

It’s hard to tell from a single-slice image, but one suggestion I will make is that if your T1 image contains a lot of neck, the image intensities in that region may contribute to the calculation of the cost function that drives the registration; hence when flirt does its initial “coarse” search to find an approximate overlap, it could align the brain in the FA image with the middle of the neck in the T1. This is just a matter of understanding that registration algorithms are not magic black boxes, and sometimes need a bit of help. For instance, you could crop the neck out of the image; or, if the images are intrinsically nearly aligned, you could instruct flirt to not perform the coarse search (where it e.g. tests to see if one image is rotated 90 degrees with respect to the other), and focus on fine improvement of the established near-alignment.
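Both options can be sketched with standard FSL tools (filenames are placeholders; robustfov is FSL’s automatic field-of-view cropping utility):

```shell
# Crop excess neck from the T1 before registration:
robustfov -i T1.nii.gz -r T1_cropped.nii.gz

# If the images are already nearly aligned, skip the coarse search entirely:
flirt -in fa.nii.gz -ref T1_cropped.nii.gz -dof 6 -nosearch -omat fa2t1.mat
```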


Dear Rob

Thank you very much for your helpful answer!
Yes, I’m certain that having this information in the FAQ will help a lot of users.

I’m in the process of implementing MRtrix at our neurosurgical department, using it as a more detailed second-opinion tool alongside Brainlab’s Elements software to support surgical planning and navigation.
If in the future the mrregister command is able to perform successful T1-to-DWI registrations, it would be much more convenient and, from our perspective, even easier to integrate MRtrix into a clinical setting…

Thanks again, Lucius

Hi, Lucius,

Have you solved the mismatch problem yet? I’m running into the same problem when doing co-registration (registering a DTI image to the T1 image of the same subject).
I first run recon-all on the T1 image, which transforms it to Talairach space; I named the result t1_freesurfer.nii.gz

flirt -in DTI.nii.gz -ref t1_freesurfer.nii.gz -out DTI_flirtto_t1_freesurfer_tmp.nii.gz -omat DTI_flirtto_t1_freesurfer_tmp.mat -dof 6

flirt -in DTI.nii.gz -ref t1_freesurfer.nii.gz -out DTI_flirtto_t1_freesurfer.nii.gz -init DTI_flirtto_t1_freesurfer_tmp.mat -omat DTI_flirtto_t1_freesurfer.mat -dof 6

transformconvert DTI_flirtto_t1_freesurfer.mat DTI.nii.gz t1_freesurfer.nii.gz flirt_import DTI_flirtto_t1_freesurfer_mrtrixformat.txt

mrtransform DTI.nii.gz -linear DTI_flirtto_t1_freesurfer_mrtrixformat.txt DTI_in_t1_freesurfer.nii.gz

However, the result:

Then I tried to use epi_reg (a script designed to register EPI images (typically functional or diffusion) to structural (e.g. T1-weighted) images) :

It seems to work, but the output file is extremely large (original DTI: 61.8 MB, output DTI: 519.4 MB), which puts a lot of pressure on fiber tracking. So I had to give up on this method.

Finally, I found that dt_recon registers the diffusion data to the structural data as part of its processing. As stated in the FreeSurfer documentation, the b-value and direction information are input via bvec and bval text files in the same format as those used in FSL diffusion processing.
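For reference, a dt_recon invocation might look something like this (a sketch with placeholder paths and subject ID; recon-all must already have been run for the subject):

```shell
# Fit tensors and register the diffusion data to the FreeSurfer anatomy:
dt_recon --i dwi.nii.gz --b dwi.bvals dwi.bvecs \
    --s subject01 --o $SUBJECTS_DIR/subject01/dti
```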

It works, registering the DTI image to the T1 image in (Talairach? MNI305?) space. In fact, I’m not sure whether it is Talairach or MNI305 as FreeSurfer states.

Before running dt_recon, I have to run the recon-all command, which takes more than 10 hours even though my machine is quite decent (8 CPU cores and a Titan graphics card).

I sincerely hope the MRtrix developers can provide some functionality for intra-subject registration, i.e. registration from functional/diffusion images to structural images. It would really be very convenient for users.

Many Thanks,

Hi Superclear,

Maybe try the skull-stripped version of the T1 image. It seems the frontal CSF part of the b0 image is registered to the skull here.
This post can also be of interest to you: Distortion correction using T1




Hi @SuperClear,

yes, in the end it worked.
Are both your images 3D, or is one 3D and the DWI 4D?
You can use the command provided by @maxpietsch here: ants reg.

You could use this to get a 3D DWI before using the above-mentioned command:

ExtractSliceFromImage 4 anonym_b0.nii anonym_b0_volume0.nii.gz 3 0
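If you prefer staying within MRtrix3, extracting the first volume could alternatively be done with mrconvert (a sketch; the same filenames are assumed):

```shell
# Extract volume 0 along axis 3, yielding a genuinely 3D image:
mrconvert anonym_b0.nii -coord 3 0 -axes 0,1,2 anonym_b0_volume0.nii.gz
```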

then it should work.

With epi_reg I also got huge datasets, so I didn’t continue using it.

For registration from T1 to MNI152 I use the bet-extracted T1 (skull-stripped via FSL’s bet) and register it with:

flirt [options] -in <inputvol> -ref <refvol> -omat <outputmatrix>

and then use the generated .mat file for other data sets from the same acquisition:
flirt [options] -in <inputvol> -ref <refvol> -applyxfm -init <matrix> -out <outputvol>

It seems to me that your images are not in Talairach space, since for that the anterior and posterior commissures should lie on a horizontal line.
But I think one of the experts should comment on those processes.
Still, I hope that helped a bit.

Best regards, Lucius


Hi, Lucius,

Sweet thanks for the help!

I registered the T1 to MNI152 using both linear and non-linear registration.
I find that non-linear registration matches MNI152 better:

the registration commands:
flirt -in t1_skulled.nii.gz -ref MNI152_T1_1mm_brain.nii.gz -omat t1_skulled_to_MNI152.mat
and then use the generated .mat as input to --aff in fnirt:
fnirt --in=t1_skulled.nii.gz --ref=MNI152_T1_1mm_brain.nii.gz --aff=t1_skulled_to_MNI152.mat --iout=t1_skulled_in_MNI152_fnirt_out.nii.gz
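If you later want to bring other images into MNI152 space with the same warp, you can have fnirt also save the warp coefficients and then reuse them with applywarp (a sketch; the --cout option is added to your fnirt call, and other_image.nii.gz is a placeholder for any image already aligned with the T1):

```shell
# Also save the warp coefficients so they can be reused:
fnirt --in=t1_skulled.nii.gz --ref=MNI152_T1_1mm_brain.nii.gz \
    --aff=t1_skulled_to_MNI152.mat --cout=t1_to_MNI152_warpcoef \
    --iout=t1_skulled_in_MNI152_fnirt_out.nii.gz

# Apply the same warp to another image aligned with the T1:
applywarp --in=other_image.nii.gz --ref=MNI152_T1_1mm_brain.nii.gz \
    --warp=t1_to_MNI152_warpcoef --out=other_image_in_MNI152.nii.gz
```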

I will try ants registration later.

Thanks, Chaoqing

Hi, Thibo,

Thanks for your advice!

Yeah, you’re right, I didn’t do any distortion correction on diffusion data.
Here, on the sagittal view of the registered DTI and T1 images, the misalignment is visible.

I will then try to correct the distortion as you suggested, and then do registration. Hope it will work.



It seems to work, but the output file is extremely large (original DTI: 61.8 MB, output DTI: 519.4 MB), which puts a lot of pressure on fiber tracking.

With epi_reg I got as well huge data-sets, so I didn’t continue to use that.

Sounds like the script is automatically re-gridding the input DWI to T1 image space; this is similar to the default operation of flirt. Unfortunately it doesn’t look like there’s a way to get around this: With flirt we can request the affine matrix and then apply it to just the header transformation, but epi_reg is non-linear. You could simply resample the resulting DWI series back to a lower resolution again, but you’d be performing two interpolations sequentially, which is generally not advised.
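If you did decide to accept the double interpolation, downsampling the T1-resolution output back to a coarser grid could be done with MRtrix3’s mrgrid (a sketch; the 2 mm voxel size is just an illustrative value, to be replaced with your original DWI resolution):

```shell
# Regrid the upsampled output back to 2 mm isotropic voxels:
mrgrid dwi_in_t1space.nii.gz regrid -voxel 2.0 dwi_downsampled.mif
```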

Hi Kerstin,

Thanks for the very informative steps you shared. I’m wondering why you would use mri/nu.mgz and not mri/T1.mgz instead? Or simply use brainmask.mgz, which is already skull-stripped?

@Kerstin, @Michiko,

I am personally using norm.mgz, since it is a final FreeSurfer product of intensity normalization and skull stripping (which is not the case for brainmask.mgz), prior to the intensity filtering of subcortical structures (which produces brain.mgz). Are there reasons not to use norm.mgz?


Hi @Antonin_Skoch,

Thanks for raising this point - do you know how much that step of intensity normalization would affect registration?

It would mainly depend on the cost function used in the registration. Its effect will probably be minor; I would expect a much more substantial effect on segmentation.

Hi Kerstin,

Thank you for this approach; I have been using this method with good success. I would also like to ask whether the same transformations, applied to the WM_bin image used in the bbr registration step, could be used as a mask to supply to ACT? (Considering the fact that I don’t have the reverse phase-encoded data.)


Hi, Archith,

The suggested approach is to use the 5ttgen fsl command on the T1 structural image to generate the proper input for ACT.
However, performing distortion correction on the DWI is highly recommended, since otherwise you will not get sufficient spatial correspondence between the DWI and structural image, especially in frontal areas.
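As a minimal sketch (filenames assumed; the T1 should already be co-registered to the DWI, and wmfod.mif stands for your white-matter FOD image):

```shell
# Generate the five-tissue-type image required by ACT:
5ttgen fsl T1_coreg.mif 5tt.mif

# Use it during tractography via the -act option:
tckgen wmfod.mif tracks.tck -act 5tt.mif -seed_dynamic wmfod.mif -select 100000
```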



Hi Antonin,

Thank you very much for the clarification.


Why do you use the inverse transform rather than transforming the T1 image to DWI space directly? Would that be worse?

Hi @LiuYuchen,

When you use bbr (note the -cost bbr in the flirt call) you need a WM mask in the reference image; this is why, when using this metric, you need to register the b0 to the T1w and then invert the transform. I hope this helps.
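A sketch of what that sequence could look like (filenames are placeholders; the WM segmentation would typically come from FSL’s fast or from FreeSurfer, and init.mat from an initial standard flirt run):

```shell
# bbr needs a WM segmentation in the *reference* (T1) space:
flirt -in b0.nii.gz -ref T1.nii.gz -dof 6 -cost bbr -wmseg T1_wmseg.nii.gz \
    -init init.mat -omat diff2struct.mat -schedule $FSLDIR/etc/flirtsch/bbr.sch

# Invert the matrix so it maps T1 -> diffusion instead:
convert_xfm -omat struct2diff.mat -inverse diff2struct.mat
```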

Best regards,


Could you please recommend a tracking pipeline for data without distortion correction?

Could you please recommend a tracking pipeline for data without distortion correction?

As far as tractography is concerned, this is just tracking without ACT, which is no different to what people were doing before ACT existed.

Given you asked about a pipeline, it depends on exactly what you are trying to achieve; there’s more than one possible experimental pipeline that utilises tractography. But if you’re talking specifically about connectome construction (reasonably likely given the thread is about co-localisation of anatomical information), historically I’ve found the process so ill-posed without ACT that I’ve not bothered putting in the effort to find something that even somewhat works. Doing a three-tissue decomposition would help a great deal, and you would need to use a much larger minimum distance for the radial search when assigning streamlines to parcels.

I’m regularly asked about performing distortion correction in the absence of reversed phase-encoding data, and have done some asking myself. Unfortunately I’ve been underwhelmed by the solutions I’ve tried, and have not received a sufficiently confident recommendation of a solution that I could verify and pass on.