Diffusion in MNI space

Dear MRtrix team,

I was hoping to run an analysis in MNI space, and wanted to check what you think is the best approach to this within the MRtrix framework.

Thanks so much.



Hi Peter,

I’m not sure how much advice I or anybody else can provide without more detail about exactly what you’re trying to achieve. Obviously you need to avoid divulging confidential information regarding novel aspects of the experiment you’re trying to perform, or the cohort you’re looking at, but hopefully you can still give us something to work from. E.g.:

  • Do you want to use MNI space purely to report localisation?
  • Are you using a parcellation defined in MNI space for e.g. a connectome analysis?
  • Are you defining WM pathways of interest in MNI space, and want to assess per-subject quantitative image values based on that segmentation?
  • Do you want to project per-subject results into MNI space in order to derive some form of group average?


Thanks Rob, I am keen to use a parcellation defined in MNI space for a connectome analysis.



OK; in that case the ‘generally accepted’ methodology would be, for each subject:

  • Register subject’s T1 image to the MNI template: mrregister
  • Invert this transformation: transformcalc
  • Transform the parcellation image to subject space, re-gridding onto the subject’s T1 image, using nearest-neighbour interpolation to preserve index values: mrtransform
  • Generate the connectome based on streamlines tractography and parcellation in subject space: tck2connectome
  • Perform all of your inter-subject comparisons using the connectome representation, rather than in image space

There are other ways to achieve a similar result, but this seems to be what people have converged on, and makes the most sense in my mind. People are more than welcome to recommend alternatives though! It does however assume that you’re able to correct EPI distortions in your DWIs, so that you can get accurate co-localisation of your DWI and T1 images, and therefore use the T1 to determine the subject->MNI transformation.
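As a concrete sketch of those steps (all filenames here are hypothetical placeholders, and this assumes a non-linear registration via mrregister, which writes warps in both directions):

```shell
# 1. Register the subject's T1 to the MNI template; -nl_warp writes
#    deformation fields in both directions, so no separate inversion
#    step is needed for the non-linear case
mrregister T1.mif MNI_template.mif -type affine_nonlinear \
    -nl_warp subj2mni_warp.mif mni2subj_warp.mif

# 2-3. Warp the MNI-space parcellation into subject space, using
#    nearest-neighbour interpolation to preserve integer node indices
mrtransform parcellation_mni.mif -warp mni2subj_warp.mif \
    -interp nearest parcellation_subject.mif

# 4. Generate the connectome from subject-space streamlines and parcellation
tck2connectome tracks.tck parcellation_subject.mif connectome.csv
```

For a purely linear registration, the inversion would instead be performed explicitly, e.g. `transformcalc subj2mni.txt invert mni2subj.txt`.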


Hi Peter,
Just to follow up on what Rob wrote. You would need to be careful when registering T1 images to an MNI template with mrregister. Currently we only have a mean-squared metric, which means that your images need to be in roughly the same intensity range for registration to work. We plan to add a normalised cross-correlation metric (or an intensity normalisation step) in the future. However, for now you could just use ANTs for this step.

If you do decide to use mrregister, you won’t need to invert the transformation since it outputs warps that go in both directions. transformcalc is designed for linear transforms.

Note that ANTS will also give you warps in both directions, so you can use the inverse warp to get the parcellation back to subject space.
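For completeness, a hedged sketch of that ANTs route (hypothetical filenames; antsRegistrationSyN.sh writes the affine, forward warp, and inverse warp using the output prefix given via -o):

```shell
# Register the subject T1 (moving) to the MNI template (fixed)
antsRegistrationSyN.sh -d 3 -f MNI_template.nii.gz -m T1.nii.gz -o t1_to_mni_
# Outputs include: t1_to_mni_0GenericAffine.mat, t1_to_mni_1Warp.nii.gz,
#                  t1_to_mni_1InverseWarp.nii.gz

# Bring the MNI-space parcellation back to subject space: inverted affine
# plus inverse warp; nearest-neighbour interpolation preserves node indices
antsApplyTransforms -d 3 \
    -i parcellation_mni.nii.gz -r T1.nii.gz \
    -t "[t1_to_mni_0GenericAffine.mat,1]" \
    -t t1_to_mni_1InverseWarp.nii.gz \
    -n NearestNeighbor -o parcellation_subject.nii.gz
```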


Thanks Rob and Dave,

Can I check: if I want to project per-subject results into MNI space, how would I go about this?

Thanks for all your help.



Hi Peter,
Just to clarify, is this for a connectome study? Do you mean you want to transform MNI parcellations back to subject space for connectome analysis? I recommend you perform tracking and node assignment in subject space, rather than in template space.

Hi Dave,

My aim is to compare the connectome between groups in MNI space. If I perform the tractography in subject space is there a way to then project the connectome to MNI space?




Once you’ve generated a connectome for each subject, your data consists of a matrix of connectivity values per subject. This is not image data, so there’s no meaningful concept of ‘projecting’ this information to MNI space. As long as the anatomical parcels corresponding to each row and column in the matrix are equivalent between subjects, then this is the only correspondence between subjects that you require. If it’s some other type of information that you’re wanting to ‘normalize’ to template space (e.g. the spatial extent of the pathway corresponding to each edge), then you may need to reformulate your question with better specificity.


Thanks Rob, I am aiming to compare connectome properties with other imaging variables in MNI space.

Is there any way of doing this without running the tractography in MNI space?



Assuming that you need spatial correspondence between those streamlines belonging to particular edges of the connectome and images in MNI space, I think the most general approach is to take the transformation derived to register the subject images to MNI space, and warp the streamlines themselves into MNI space. You are then free to generate whatever contrast you wish in MNI space based on those streamlines. The next best approach is to generate whatever image you are interested in in subject space, and then warp that image to MNI space; this may however not be wholly appropriate depending on precisely which images you intend to warp, due to e.g. modulation of streamline density.

You most likely don’t want to be repeating tractography in MNI space, because the tracking may behave significantly differently due to re-gridding. Technically you’d be obtaining an entirely new ‘connectome’ from tracking in that space, with different values for each edge; so directly comparing image information derived from that MNI-space tracking against connectome values obtained from subject-space tracking would be questionable.

Generally the term “connectome properties” would be used to refer to properties that have been derived from the connectome matrix, which therefore exist in the space of that network graph rather than three-dimensional space. So I’m hoping that the above is what you’re actually looking for.
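As one concrete example of generating such a contrast (hypothetical filenames, assuming tracks_mni.tck is a tractogram that has already been warped into MNI space), a track-density image on the template grid could be produced with tckmap:

```shell
# Map streamline density onto the MNI template grid; the output image
# then has direct voxel-wise correspondence with other MNI-space images
tckmap tracks_mni.tck -template MNI_template.mif tdi_mni.mif
```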


Thanks Robert, could you advise how I would go about warping the streamlines to MNI space and then constructing the connectome?



OK, looks like we still haven’t added a tutorial for warping tracks for MRtrix3; this is definitely something that needs to be created at some point. Conceptually it’s the same process as that described in the MRtrix 0.2 documentation. Probably the best description right now is here; the particulars may change slightly depending on what software you’re using to do the registration step.
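In the meantime, a hedged MRtrix3-only sketch of the streamline-warping step (hypothetical filenames; the particulars will differ if the registration is done with other software):

```shell
# Register the subject T1 to the MNI template, keeping the non-linear
# warps in both directions
mrregister T1.mif MNI_template.mif -nl_warp subj2mni_warp.mif mni2subj_warp.mif

# Streamlines require the OPPOSITE warp to images: to move tracks from
# subject space to MNI space, supply the warp that maps images MNI->subject
tcktransform tracks_subject.tck mni2subj_warp.mif tracks_mni.tck
```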

Note however that you typically wouldn’t perform this process for the sake of constructing the connectome. The more conventional approach for using a volumetric atlas in connectome construction is (since I can’t find a good example where I’ve explained it already):

  1. Register subject image to template
  2. Invert transformation
  3. Re-grid parcellation image in subject space based on inverse transformation
  4. Generate connectome based on streamlines and parcels both defined in subject space.



I have been trying to get my diffusion data / tractogram into MNI space.

I have been following some of the posts on the MRtrix blog.

I have used this approach to get my tractogram in MNI space

  1. Register T1 to DWI data
  2. Use this T1 to obtain warpinit
  3. Register this T1 to MNI space to obtain T12MNI.mat
  4. transformconvert T12MNI.mat T1.image dwi.mif flirt_import out.mrtrix
  5. mrtransform warp-[].nii -linear out.mrtrix flirtout.mif -template MNI
  6. tcknormalise input.tck flirtout.mif trackorient.tck

But I am unable to correctly register my tractogram to MNI space.

I have also observed that I have to use fslreorient2std on my T1 image to correctly register my T1 image to MNI space. But the transformation matrix obtained with this revised T1 still doesn’t help me to register my tractogram to MNI space.

Could you please provide some suggestions on how I can correctly transform my tractogram into MNI space? I want to use this tractogram to perform volumetric operations and compare with standard MNI.

Kind regards.

I think what you need is a -inverse in step 5. Transforming streamlines is a different process from transforming images, and requires the opposite transform from what you’d use to transform images. See this thread for details.
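Concretely (keeping the filenames from the post above), step 5 with the -inverse option would read something like:

```shell
# Add -inverse so that the linear transform applied to the warp images is
# the opposite of the image transform, as required for streamlines
mrtransform warp-[].nii -linear out.mrtrix -inverse flirtout.mif -template MNI
tcknormalise input.tck flirtout.mif trackorient.tck
```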

Hi Rob,
Could you elaborate on how to do this? 🙂 I am new to MRtrix and would like to delineate the arcuate fasciculus in 20 subjects in template space, and then get FA values for this tract in every subject.

Thank you very much for your help.


Hi Klara,

Firstly I need to clarify this bit

… would like to delineate the arcuate fasciculus in 20 subjects in template space…

Generally, if one is to somehow delineate / parcellate some structure in template space, this would yield a single definition of that structure in the template space. One would then either use that definition as a mask in order to sample values from subject images that have already been transformed into template space, or transform that definition into the native individual subject spaces and sample quantitative values there. Not sure whether or not this was a misunderstanding or just awkward wording, so thought I’d cover it off.

Beyond that though, it’s a slightly difficult question to answer, as the techniques / commands involved depend on how you choose to go about it. For instance, one could simply manually draw an ROI corresponding to the arcuate fasciculus, and then use mrstats -output mean on the individual subject images transformed into template space; that’s literally all that would be involved. But there are various degrees of complexity beyond that, e.g. how to segment the arcuate fasciculus, for which again there is no single unambiguous answer.
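To illustrate that simplest variant (hypothetical filenames: arcuate_roi_mni.mif drawn in template space, and each subject’s FA map already transformed into template space):

```shell
# Mean FA within the template-space arcuate ROI, one value per subject
for subj in sub01 sub02 sub03; do
    mrstats ${subj}_fa_mni.mif -mask arcuate_roi_mni.mif -output mean
done
```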