Different atlas for connectome

Hi everyone,
I want to use MRtrix to create a connectome whose edges are the number of streamlines connecting the ROIs of one specific atlas.
Following the instructions (Structural connectome construction — MRtrix3 3.0 documentation), I created a first connectome using the default FreeSurfer recon-all output as the parcellated brain (aparc+aseg). However, I need to change the default options and build a connectome using a different atlas for the GM segmentation, and consequently for the connectome nodes.
I tried the AAL atlas, following different discussions (e.g. Problems with AAL parcellation - #2 by rsmith). These are the commands I ran:

flirt -in subj01_T1_bet.nii.gz -ref ROI_MNI_V4.nii -out T12AAL_nn.nii -omat T12AAL_nn.mat -interp nearestneighbour -dof 12
convert_xfm -inverse T12AAL_nn.mat -omat AAL2T1_nn.mat
flirt -in ROI_MNI_V4.nii -ref subj01_T1_bet.nii.gz -interp nearestneighbour -applyxfm -init AAL2T1_nn.mat -out AAL2T1_nn.nii

However, the output does not align with the original T1.

Are these steps correct in general?
And what can I do to obtain a good T1 parcellation based on an atlas other than FreeSurfer's?
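For reference, downstream I plan to feed the warped atlas into the usual MRtrix3 connectome step; only the node image changes. A minimal sketch of that step (filenames such as tracks.tck and aal_in_T1w.nii are placeholders; by default the script only prints the commands, so it is safe to run without MRtrix3 installed):

```shell
#!/bin/sh
# Dry-run sketch of the downstream MRtrix3 step with a non-FreeSurfer node image.
# Set DRYRUN=0 to actually execute the commands.
run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

# Build the streamline-count connectome from the warped atlas labels.
run tck2connectome tracks.tck aal_in_T1w.nii connectome.csv \
    -symmetric -zero_diagonal
```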

Maybe you need to try fnirt instead of flirt; see FNIRT/UserGuide - FslWiki (ox.ac.uk).

Hi Ken,
Thank you for your suggestion. However, I’ve already tried this solution and the output is deformed.

I don’t know how to proceed.
Any suggestion would be helpful.


You can use ANTs. This tutorial can help you:

As @Erom_Freitas said, I also recommend ANTs over FSL for this registration. If you continue to use FSL, please show your code and the bet image.

Thank you so much for your replies @Erom_Freitas @Ken32g !
I’m not an ANTs expert; would it be correct to run this?
antsRegistrationSyNQuick.sh -d 3 -f ${sogg}_T1w_betted.nii.gz -m atlas.nii.gz -o ${sogg}_atlas2t1_

Regarding FSL instead I tried to run directly
flirt -in atlas.nii -ref ${sogg}_T1w_betted.nii.gz -out atlas2t1.nii -omat atlas2t1.mat -interp nearestneighbour -dof 12

and the result was the following (which seems not so bad):

while the bet image is the following (obtained with -f 0.12):

What do you suggest?


The result of the FLIRT registration does not seem as good as you think. Look at the superior portion of the pons and the insular lobes.

This is a solution suggested by ChatGPT, but it is clear enough to convey my opinion: register the individual brain to the standard T1 template, then apply the inverse deformation field to the AAL atlas image, which is aligned with the template.

I also found that ANTs may be more efficient, given its support for multithreading.

Using FSL

For FSL, we’ll use flirt for linear registration followed by fnirt for non-linear registration.

First, perform the linear registration of the T1-weighted image to the MNI152 T1 template:

# Linear registration using FLIRT
flirt -in T1w_image.nii -ref MNI152_T1_2mm.nii -out T1w_lin_reg.nii -omat T1w_to_MNI_linear.mat

Then, perform non-linear registration using the output matrix from flirt:

# Non-linear registration using FNIRT
fnirt --in=T1w_image.nii --aff=T1w_to_MNI_linear.mat --cout=T1w_to_MNI_nonlinear_coeff --config=T1_2_MNI152_2mm.cnf --ref=MNI152_T1_2mm.nii

To apply the inverse transformation to the AAL atlas and resample it to T1w space:

# Invert the warp
invwarp -w T1w_to_MNI_nonlinear_coeff.nii.gz -o MNI_to_T1w_nonlinear_coeff -r T1w_image.nii

# Apply the inverse warp to the AAL atlas
# (nearest-neighbour interpolation keeps the atlas labels integer-valued)
applywarp -i aal_atlas.nii -r T1w_image.nii -o aal_in_T1w_space.nii -w MNI_to_T1w_nonlinear_coeff.nii.gz --interp=nn
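The four FSL steps above can be wrapped in a per-subject loop. A hedged sketch (subject IDs and filenames are assumptions, and `--interp=nn` matters so the integer atlas labels survive resampling); by default it only prints the commands:

```shell
#!/bin/sh
# Dry-run wrapper for the FSL route; set DRYRUN=0 to actually execute.
run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

for subj in subj01; do                    # hypothetical subject list
  run flirt -in ${subj}_T1w.nii -ref MNI152_T1_2mm.nii \
      -out ${subj}_T1w_lin.nii -omat ${subj}_T1w_to_MNI.mat
  run fnirt --in=${subj}_T1w.nii --aff=${subj}_T1w_to_MNI.mat \
      --cout=${subj}_T1w_to_MNI_coeff --config=T1_2_MNI152_2mm.cnf \
      --ref=MNI152_T1_2mm.nii
  run invwarp -w ${subj}_T1w_to_MNI_coeff -o ${subj}_MNI_to_T1w_coeff \
      -r ${subj}_T1w.nii
  # Nearest-neighbour interpolation keeps the atlas labels integer-valued.
  run applywarp -i aal_atlas.nii -r ${subj}_T1w.nii \
      -o ${subj}_aal_in_T1w.nii -w ${subj}_MNI_to_T1w_coeff --interp=nn
done
```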

Using ANTs

For ANTs, use antsRegistration for non-linear registration and antsApplyTransforms to apply the inverse transformation.

Register the T1-weighted image to the MNI152 template:

# Non-linear registration using ANTs
antsRegistrationSyN.sh -d 3 -f MNI152_T1_2mm.nii -m T1w_image.nii -o T1w_to_MNI_ -n 12

Apply the inverse transformation to the AAL atlas, resampling it to the space of the T1w image:

# Apply the inverse transformation (the affine is inverted via [...,1];
# the inverse warp produced by the registration is used as-is, and
# NearestNeighbor interpolation preserves the atlas labels)
antsApplyTransforms -d 3 -i aal_atlas.nii -r T1w_image.nii -o aal_in_T1w_space.nii -n NearestNeighbor -t [T1w_to_MNI_0GenericAffine.mat,1] -t T1w_to_MNI_1InverseWarp.nii.gz
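On multithreading: most ANTs programs honour the standard ITK environment variable. A sketch combining both steps (thread count and filenames are assumptions; by default the block only prints the commands, so it runs without ANTs installed):

```shell
#!/bin/sh
# Dry-run sketch of the ANTs route with explicit multithreading.
# ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS is read by ANTs/ITK tools.
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=${NTHREADS:-12}
run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

run antsRegistrationSyN.sh -d 3 -f MNI152_T1_2mm.nii -m T1w_image.nii \
    -o T1w_to_MNI_ -n "$ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS"
# NearestNeighbor preserves labels; the affine is inverted ([...,1]),
# while the inverse warp from the registration is applied as-is.
run antsApplyTransforms -d 3 -i aal_atlas.nii -r T1w_image.nii \
    -o aal_in_T1w_space.nii -n NearestNeighbor \
    -t '[T1w_to_MNI_0GenericAffine.mat,1]' -t T1w_to_MNI_1InverseWarp.nii.gz
```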

Further optimization is required, but feel free to give it a try.