Use of volumetric templates & parcellation atlases

When seeking to use an atlas parcellation that is defined in the space of a volumetric template image (as opposed to e.g. the surface-based parcellations and surrounding commands provided by FreeSurfer), decisions must be made both as to how to obtain spatial correspondence between the subject data and the template, and as to how the data should be spatially transformed so that both sources of data can be handled within a single congruent space.

Here I speak specifically about processing of single-subject data; however, similar advice most likely holds if you happen to have a population-specific template and wish to incorporate atlas information that is defined in the space of some other template.

When it comes to handling individual-specific diffusion image data & especially tractography, it is highly recommended that processing occur within the space of the acquired data for that individual. This is even more relevant if a high-resolution anatomical image is additionally to be utilised, e.g. for ACT. If desired, quantities derived in individual subject space can later be transformed into template space.

For volumetric parcellations, the recommended workflow is to perform registration between the individual subject image data and the template image, and then use the estimated transformation to project the parcellation data associated with the template back into the individual subject space. So the relevant steps are:

  • Perform registration from the subject image of interest to the target template image;

  • Obtain the inverse warp, i.e. that which transforms data spatially from the template image space to that of the individual subject;

  • Apply this transformation to the parcellation image, resampling the data onto a new image grid in the space of the individual subject; if the parcellation image is defined using integer index labels, then nearest-neighbour interpolation must be used in order to preserve the values of those labels (e.g. a voxel that lies in between parcel 2 and parcel 4 should not take a value of 3 following transformation). If possible, resampling the parcel data onto a voxel grid of higher rather than lower spatial resolution will reduce the influence of the inadequacies associated with nearest-neighbour interpolation (see the sketch immediately below for one way of constructing such a grid).
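
For instance, one possible way of constructing such a higher-resolution target grid (a sketch only, assuming MRtrix3 is available; the mrgrid command exists from version 3.0 onwards, with earlier versions providing mrresize for the same purpose, and the file names here are hypothetical) is to regrid the subject's anatomical image to a finer isotropic voxel size:

    $ mrgrid subject_T1.nii regrid subject_T1_1mm.nii -voxel 1.0

The resulting image would then be supplied purely as the target grid when applying the transformation to the parcellation (e.g. as the -R image for ANTs' WarpImageMultiTransform, or the --ref image for FSL's applywarp); its image intensities are irrelevant, as it is only the voxel grid that is used.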

Since this registration is not always intra-modal, and raw image intensities may vary significantly between the subject data and the template, the MRtrix3 command mrregister is not yet recommended in this use case.

While many experiments have been published using a 12 degree-of-freedom affine registration between subject and template data, in my own (admittedly limited) personal experience I have never found the results of such a registration to be acceptable.

I only have experience with two software packages for performing non-linear registration - ANTs and FSL - and in both cases only with using a T1-weighted image. Contributions from those with more experience in registering data of other modalities, or in using these or other packages, are invited.

  • ANTs:

    Here I have only used & had success with the registration parameters that are used in the well-cited Klein et al., NeuroImage 2009 manuscript. The registration is intrinsically symmetric, so the inverse non-linear transformation is automatically provided as an output of the first command. I then found it simpler to use another ANTs command to transform the parcellation data rather than converting the registration outputs into a form compatible with MRtrix3’s mrtransform command.

    $ ANTS 3 -m PR[template_image.nii,subject_T1.nii,1,2] -o ANTS -r Gauss[2,0] -t SyN[0.5] -i 30x99x11 --use-Histogram-Matching
    $ WarpImageMultiTransform 3 parcellation_in_template_space.nii parcellation_in_subject_space.nii -R subject_T1.nii -i ANTSAffine.txt ANTSInverseWarp.nii --use-NN
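
    As an optional sanity check (this is an addition to the recipe above, and assumes that the forward warp field is written as ANTSWarp.nii alongside the ANTSInverseWarp.nii used above), the quality of the registration itself can be inspected by warping the subject image into template space and overlaying it on the template:

    $ WarpImageMultiTransform 3 subject_T1.nii subject_T1_in_template_space.nii -R template_image.nii ANTSWarp.nii ANTSAffine.txt
    $ mrview template_image.nii -overlay.load subject_T1_in_template_space.nii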
    
  • FSL:

    Personally I needed to jump through some hoops to get FSL registration to produce results that I was content with. Again, others who have more experience with this may wish to propose alternatives.

    1. Initial affine registration

      $ mrcalc subject_T1.nii subject_T1_brainmask.nii -mult subject_T1_masked.nii
      $ mrcalc template_image.nii template_brainmask.nii -mult template_image_masked.nii
      $ flirt -ref template_image_masked.nii -in subject_T1_masked.nii -omat T1_to_template.mat -dof 12
      

      Here I use brain-extracted versions of both the subject and template images.
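
      If you do not already have a brain mask for the subject image, one possible way of obtaining one (a sketch only; any brain extraction tool could be used, and note that FSL's bet chooses its own output names) is:

      $ bet subject_T1.nii subject_T1_brain -m

      This would write the binary mask as subject_T1_brain_mask.nii.gz (depending on your FSLOUTPUTTYPE), which would take the place of subject_T1_brainmask.nii in the commands above; most standard templates (e.g. MNI152) are distributed with a corresponding brain mask already.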

    2. Non-linear registration

      $ maskfilter subject_T1_brainmask.nii dilate - -npass 3 | mrconvert - subject_T1_brainmask_dilated.nii -strides -1,+2,+3
      $ maskfilter template_brainmask.nii dilate - -npass 3 | mrconvert - template_brainmask_dilated.nii -strides -1,+2,+3
      $ fnirt --config=T1_2_MNI152_2mm.cnf --ref=template_image.nii --in=subject_T1.nii --aff=T1_to_template.mat --refmask=template_brainmask_dilated.nii --inmask=subject_T1_brainmask_dilated.nii --cout=T1_to_template_warpcoef.nii
      

      Personally I observed improved results here by providing dilated versions of the brain masks of both the individual subject and template T1-weighted images: this struck a balance between avoiding edge-related effects driving the registration at the periphery of the brain, and preventing differences in image detail outside of the brain (between the individual-specific image and the blurred template) from driving the registration in ways that, due to regularisation of the non-linear warp field, could affect alignment at the brain periphery.
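
      If you want to verify that the dilated masks provide an adequate margin around the brain before running fnirt, one simple check (using MRtrix3's viewer; file names as above) is to overlay the dilated mask on the corresponding image:

      $ mrview subject_T1.nii -overlay.load subject_T1_brainmask_dilated.nii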

    3. Invert the non-linear warp field

      $ invwarp --ref=subject_T1.nii --warp=T1_to_template_warpcoef.nii --out=template_to_T1_warpcoef.nii
      
    4. Apply the inverse transformation to the parcellation image

      $ applywarp --ref=subject_T1.nii --in=parcellation_in_template_space.nii --warp=template_to_T1_warpcoef.nii --out=parcellation_in_subject_space.nii --interp=nn
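
      Note that the --ref image here only defines the target voxel grid, so an upsampled image such as the one sketched earlier could be supplied instead of the native T1. Depending on the datatype that applywarp writes, it may also be worth explicitly converting the warped parcellation back to an integer type before passing it to downstream commands; one way of doing so with MRtrix3 (file names as above) is:

      $ mrconvert parcellation_in_subject_space.nii parcellation_in_subject_space.mif -datatype uint32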
      