I am working on macaque data.
I have T1 image + segmentation image from an atlas I want to use.
Also, I have perfectly aligned DWI and T1 images from the subjects I am working on.
I want to construct a connectome according to that atlas.
With flirt + mrregister -type nonlinear I get perfect alignment between the atlas T1 image and my T1.
Using the same warp, I aligned the segmentation (nodes.mif) file. This doesn't turn out well when I use -datatype uint32 (see image 3), but with other datatypes (for example int32, see image 2) I can get the image; however, the nodes are not well aligned when I load them as a node image. Oddly, when I overlay the int32 nodes.mif image on the subject's T1, they fit perfectly (image 4).
After flirt and transformconvert everything looks good (image 1).
Are there any solutions for nonlinear registration of nodes.mif?
I guess my other option is to register DWI and T1 to the atlas T1.
However, I would like to avoid registering and moving DWI.
A third possible option is tcktransform. But the warp I made by coregistering the two T1 images (atlas and subject) doesn't work, since there is a difference in resolution between the subject's T1 and DWI. So I extracted the b=0 volumes from the DWI, averaged them (-mean), and coregistered that to the atlas T1, but that also didn't turn out well.
Your explanation is slightly difficult to follow, but I'll raise two points that, taken together, should hopefully allow you to resolve the issue:
Parcellation images should never be registered; they should only ever be transformed. These images do not contain appropriate contrast to be driving the image alignment process. Instead, the registration should be done using the subject’s T1-weighted image and the T1-weighted template image, and then the transformation that was estimated from that process should be applied to the parcellation image.
When transforming parcellation images, it is necessary to use nearest neighbour interpolation. A voxel that lies on the border between node 2 and node 4 should not obtain an image intensity of 3. Performing the transformation and then converting to an integer data type is not sufficient here: the resampling process itself must preserve the integer nature of the data.
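A toy 1D illustration of this point, in plain NumPy (independent of MRtrix, just to show the arithmetic): sampling on the border between a voxel labelled 2 and a voxel labelled 4 with linear interpolation invents node ID 3, whereas nearest neighbour always returns a node ID that actually exists in the parcellation.

```python
import numpy as np

labels = np.array([2, 2, 4, 4], dtype=np.uint32)  # a tiny 1D "parcellation"

def linear_sample(img, x):
    # Linear interpolation at fractional position x: averages node IDs,
    # which is meaningless for a label image.
    i = int(np.floor(x))
    t = x - i
    return (1 - t) * float(img[i]) + t * float(img[i + 1])

def nearest_sample(img, x):
    # Nearest-neighbour interpolation: pick the closest voxel's label,
    # so the output can only ever contain labels present in the input.
    return int(img[int(round(x))])

x = 1.5  # a point on the interface between node 2 and node 4
print(linear_sample(labels, x))   # 3.0 -- a node ID that does not exist
print(nearest_sample(labels, x))  # 2 or 4 -- always a real node ID
```

This is why the transformation of the parcellation image itself must be done with nearest-neighbour interpolation (in MRtrix, mrtransform's -interp nearest option), rather than resampling with a smooth kernel and casting to an integer type afterwards.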
This is also why your third image highlights the outer edges of the parcels. During the resampling process, cubic interpolation is being used. At interfaces between labelled and unlabelled voxels (where intensities are zero), overshoot of the cubic fit can cause resampled voxel intensities to take on negative values. When such a value is mapped to an unsigned integer type, integer overflow leaves those voxels with intensities near the maximum possible integer value for the given byte width (~4 billion for 32-bit). When loaded in mrview, the intensity windowing is expanded so that these voxels appear white, and the small variation in intensity between voxels belonging to different parcels becomes imperceptible.
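The overshoot-plus-overflow mechanism can be reproduced in a few lines of NumPy. The kernel below is a Catmull-Rom cubic, a stand-in for whatever cubic kernel the resampler actually uses, and the modular cast is one plausible way the float-to-uint32 conversion can behave; the exact wrapped value depends on the implementation.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    # Catmull-Rom cubic interpolation between p1 and p2, for 0 <= t <= 1.
    return 0.5 * (2 * p1
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# A 1D edge between background (0) and a parcel labelled 100:
# sampling just outside the parcel, the cubic fit undershoots below zero.
value = catmull_rom(0.0, 0.0, 0.0, 100.0, 0.5)
print(value)  # -6.25: a negative "label"

# Truncate toward zero and wrap modulo 2**32, as an unsigned cast can do:
# the voxel ends up near the top of the uint32 range (the white rim in image 3).
wrapped = np.array([int(value)], dtype=np.int64).astype(np.uint32)[0]
print(wrapped)  # 4294967290
```

With -datatype int32 the same negative values stay small and dark, which is why that version merely looks slightly off rather than blindingly white.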
Nearest neighbor interpolation was what I needed.
Thanks for the explanations! Very much appreciated.