High-Resolution Atlas Registration Failure (Bleeding) with FNIRT/MRtrix on HCP-style Data

Dear MRtrix Community and Experts,

I am running a structural connectome pipeline (CSD/ACT/SIFT2) on HCP-preprocessed data (1.5mm T1w) and am facing persistent, critical issues with the non-linear registration quality when mapping the DKT308 atlas to the individual subject space.

My goal is to obtain a highly accurate parcellation (DKT308_parc.mif) that perfectly aligns with the gray matter (GM) surface, as required for ACT.

1. Pipeline Details

  • DWI/T1 Data: HCP-style preprocessed, T1w resolution is 1.5mm.

  • Target Atlas: DKT308 (a volume atlas, correctly in MNI space).

  • MRtrix Core: 5ttgen fsl, dwi2fod msmt_csd, tckgen, tcksift2, tck2connectome.

  • Registration Tools: FSL’s flirt and fnirt.

2. The Critical Problem (Diagnosis: Over-Regularisation)

Despite extensive tuning, the final transformed atlas (DKT308_parc.mif) consistently exhibits significant bleeding (invasion) into the white matter (WM) in high-curvature sulcal regions.

I diagnosed this as over-regularization in the FNIRT process (the algorithm prioritizes smoothness over anatomical accuracy).

3. Steps Taken to Resolve (All Failed to Fix)

I have attempted the highest precision settings possible within the volume-based FNIRT framework:

  1. Low Regularization: Modified the T1_2_MNI152_2mm.cnf to drastically lower the `--lambda` regularisation schedule (e.g., final values from 30 down to 5) and increased iterations (e.g., `--miter` to 20). Result: Bleeding was only marginally reduced.

  2. MNI Target Resolution: Tested mapping the 1.5mm T1 to both 2mm MNI and 1mm MNI templates. Result: Precision remained poor due to inherent volume-warping limitations.

  3. Registration Input: Used the full (un-brain-extracted) T1w image as the FLIRT/FNIRT input (--in) while supplying the brain mask (--inmask) as a constraint. Result: Improved initial alignment, but did not fix the GM/WM boundary bleeding.
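For concreteness, the steps above can be sketched as follows; all filenames here are placeholders, not my actual paths, and the custom low-regularisation config is assumed to be a copy of T1_2_MNI152_2mm.cnf with the modifications described in point 1:

```shell
# Affine initialisation: full (non-brain-extracted) T1w to MNI template
flirt -in T1w.nii.gz -ref $FSLDIR/data/standard/MNI152_T1_2mm.nii.gz \
      -omat T1_to_MNI_affine.mat -dof 12

# Non-linear refinement with the custom (low-lambda) config
fnirt --in=T1w.nii.gz --ref=$FSLDIR/data/standard/MNI152_T1_2mm.nii.gz \
      --aff=T1_to_MNI_affine.mat --inmask=T1w_brain_mask.nii.gz \
      --config=T1_2_MNI152_2mm_lowreg.cnf --cout=T1_to_MNI_warpcoef.nii.gz

# Invert the warp and pull the MNI-space atlas into subject space,
# using nearest-neighbour interpolation to preserve integer labels
invwarp --warp=T1_to_MNI_warpcoef.nii.gz --ref=T1w.nii.gz \
        --out=MNI_to_T1_warp.nii.gz
applywarp --in=DKT308_MNI.nii.gz --ref=T1w.nii.gz \
          --warp=MNI_to_T1_warp.nii.gz --interp=nn --out=DKT308_parc.nii.gz
```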

4. Visual Evidence

(Please insert the image showing the atlas bleeding into the white matter here)

5. Request for Expert Advice

Given that volume-based FNIRT seems inadequate for this high-precision task, I am considering moving to a surface-based approach.

My questions for the community are:

  1. Tool Recommendation: What is the most robust and recommended non-FSL method to generate a high-quality, non-linear warp field that accurately maps a volume atlas (like DKT) onto the individual’s GM/WM interface for ACT? (e.g., Is ANTs/SyN a better volume-based solution, or is the effort better spent elsewhere?)

  2. Surface Solution: Since HCP data is usually processed with FreeSurfer, is the consensus that I must abandon volume-based registration for the atlas and instead use mri_surf2surf (FreeSurfer) combined with mri_aparc2aseg to create the final, accurate volume atlas (DKT308_parc.mif)?

Thank you in advance for any insights on how to proceed with the highest quality registration for ACT.

Best regards,

RuiWang

Hi RuiWang,

I would suggest avoiding referring to these data as “HCP-preprocessed”: while you may have used a pre-processing pipeline that corresponds to what was used for the HCP data, these are not “the HCP minimally preprocessed” data that people are highly accustomed to speaking about.

Further, if you had genuinely performed an HCP-style pre-processing pipeline, then I would have expected that one of the derivatives produced by that pipeline would be the transformation of subject image data to MNI space, in which case you could simply take that pre-computed warp and apply its inverse to your parcellation data.
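If you do have outputs following the HCP minimal-preprocessing directory layout, that would look something like the sketch below; I'm assuming the standard HCP file names here (MNINonLinear/xfms/standard2acpc_dc.nii.gz being the pre-computed MNI-to-native warp), and the atlas filename is a placeholder:

```shell
# Apply the pre-computed MNI->native warp directly to the atlas;
# nearest-neighbour interpolation keeps the labels integral
applywarp --in=DKT308_MNI.nii.gz \
          --ref=T1w/T1w_acpc_dc_restore.nii.gz \
          --warp=MNINonLinear/xfms/standard2acpc_dc.nii.gz \
          --interp=nn --out=DKT308_parc_native.nii.gz
```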

I personally also would not describe the results you have shown as the parcels “bleeding into the white matter”. From reading that expression alone, I had expected alignment that was generally pretty good but maybe the parcels weren’t constrained exclusively to the grey matter but crept a bit into the white matter as well. Some volumetric parcellations do this intrinsically, and it’s not a problem for many pipelines (indeed it can somewhat simplify connectome construction if you don’t have to worry about streamlines terminating at the interface between GM and WM and not quite reaching your GM-constrained parcels). But what you’ve shown here is severe gross misalignment.

One detail you have omitted is the process by which you are producing the transformed images. The most fundamental output of an image registration process is a transformation that defines movement in space, whether a linear affine transform or a non-linear warp field. For those data to be applied correctly, the interpretation of the data by the software reading and using them must be consistent with the interpretation by the software that produced them; e.g. what is defined as the origin, what are the three orthogonal reference directions against which movement is defined, deformation fields vs. displacement fields. Disagreements here won’t necessarily yield software errors; instead, the result of the data transformation will simply be erroneous.

In this respect FSL and MRtrix3 have known discrepancies. I can envisage that one way to produce results this erroneous would be to take transforms produced by FSL and apply them to image data using an MRtrix3 command without suitably converting the data from one convention to the other. E.g. transformconvert can convert an affine transformation matrix produced by FSL FLIRT into the convention utilised by MRtrix3. For FNIRT, I’m not sure that we’ve actually looked at how they encode their non-linear transformations. If you were instead to transform the parcellation data using the same software package that was used to compute the transformations, you would likely obviate this confound.

In addition, it is common for registration software to provide the capability to generate, as an output of the registration command itself, the transformed version of the input image. If this result has good anatomical alignment, but when you take the computed transformation and manually apply it to your image data there is gross misalignment, then you would know that the error is not with the registration but with your interpretation of the resulting transform data.
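For the affine component, the conversion I'm describing would look something like this; filenames are placeholders, and note that transformconvert's flirt_import operation needs the same input and reference images that were originally given to flirt:

```shell
# Import the FSL FLIRT affine into MRtrix3 conventions
transformconvert T1_to_MNI_affine.mat T1w.nii.gz MNI152_T1_2mm.nii.gz \
                 flirt_import T1_to_MNI_mrtrix.txt

# Apply the inverse of the converted transform to the MNI-space parcellation,
# regridding onto the subject T1w and preserving integer labels
mrtransform DKT308_MNI.nii.gz -linear T1_to_MNI_mrtrix.txt -inverse \
            -template T1w.nii.gz -interp nearest DKT308_parc.mif
```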

The other mistake that could conceivably yield the kind of results that you are showing would be providing parcel images rather than anatomical images to the registration algorithm. While these may be the data to which you wish to apply an accurate transformation, they are unsuitable for the computation of that transformation, as the image similarity metric that quantitatively evaluates the quality of alignment would not be optimised at the point where subjectively good alignment is achieved.

For volumetric registration, ANTs remains highly regarded. Surface-based registration is always appealing, though I don’t have a lot of experience with the projection of custom parcellations using surface-based tools.
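If you do go the ANTs route, a minimal sketch would look like the following (filenames are placeholders; I'm using the standard convenience script and its default output naming):

```shell
# Register the subject T1w (moving) to the MNI template (fixed) with SyN
antsRegistrationSyN.sh -d 3 \
    -f MNI152_T1_1mm.nii.gz -m T1w.nii.gz -o T1_to_MNI_

# Bring the MNI-space atlas into subject space by applying the inverse
# transforms (inverse affine, then inverse warp), with label-appropriate
# interpolation
antsApplyTransforms -d 3 -i DKT308_MNI.nii.gz -r T1w.nii.gz \
    -n GenericLabel \
    -t [T1_to_MNI_0GenericAffine.mat,1] \
    -t T1_to_MNI_1InverseWarp.nii.gz \
    -o DKT308_parc.nii.gz
```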

Regards
Rob