MRtrix tutorial available on OSF

Hi @tkimura,

the problem here is indeed that the latest version of FSL (which I guess you have) expects a single 3D volume as a fieldmap (the old version would simply read the first volume if the fieldmap was 4D; for another discussion of this see https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=FSL;1e8be9c7.1811). One easy way around this is to split the 4D 5tt_nocoreg.nii.gz into 3D volumes. This can be achieved, e.g., with fslsplit:

fslsplit 5tt_nocoreg.nii.gz myprefix -t

This command will split the 5tt_nocoreg image into five 3D volumes, where “myprefix” denotes the prefix of each of the volumes (so your outputs will be called “myprefix0000.nii.gz” for the first 3D volume of 5tt_nocoreg, “myprefix0001.nii.gz” for the second volume, etc.). Now you can use any of those 3D images with flirt, e.g.

flirt -in mean_b0_preprocessed.nii.gz -ref myprefix0000.nii.gz -interp nearestneighbour -dof 6 -omat diff2struct_fsl.mat

I know this is a little complicated, but it is the best way I can think of right now; let me know if you find a quicker one. In any case, I just ran these commands myself and can confirm that they work and that the resulting diff2struct_fsl.mat is correct.
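For completeness: once diff2struct_fsl.mat has been estimated, the remaining steps are the same as in the later steps of the tutorial, i.e. convert the FSL transform into MRtrix format and apply its inverse to the 5TT image. A sketch, using the file names from above:

transformconvert diff2struct_fsl.mat mean_b0_preprocessed.nii.gz myprefix0000.nii.gz flirt_import diff2struct_mrtrix.txt

mrtransform 5tt_nocoreg.mif -linear diff2struct_mrtrix.txt -inverse 5tt_coreg.mif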

Hope this helps.

Cheers,
Marlene.


Hi Marlene,

Thanks for your helpful comment. The flirt command now works without error. But when I checked the results, I found that 5tt_nocoreg.mif has the wrong orientation (the axial image has changed to sagittal). Maybe I did something wrong; I will try to fix it and let you know.

Takashi


You definitely don’t want to be using the 5 separate tissue partial volume images for registration: apart from the fact that the registration algorithm may not be able to find a good alignment between an individual tissue image and a template T1-weighted image based on its internal image matching cost function, resulting in a bad alignment, doing this five separate times would give you five tissue images with five different transforms!

You want to perform the registration using the most appropriate images for the task (e.g. T1-weighted image & mean b=0 image), and then apply the transformation that was optimised by the registration algorithm to any other images that you wish to undergo the same transformation. That is: if your 5TT image is intrinsically aligned with the T1 image data (given it was calculated from such), and you then estimate a transformation that aligns the T1 image with the DWI data, then applying that same transformation to the 5TT image will make those data also align with the DWIs.


Hi Marlene,

Excellent tutorial, I didn’t realize MRTrix had so many capabilities!

A few quick questions:

  1. Does MRTrix have any graphical interface for drawing ROIs manually?

  2. When tracking, you showed the use of anatomical constraints to ensure streamlines start and end at the WM/GM interface, and then used cortical (or sub-cortical) labels for connectome generation. What happens if the streamlines stop just short of labels (for example if my labels don’t exactly align or match the WM/GM boundary)? Will they not be counted in the connectome generation?

  3. Does MRTrix have any capabilities for connectome generation using meshes as opposed to labels? For example, a mesh of the WM/GM interface or a mid-cortex mesh. Similarly, can meshes be used with anatomically-constrained tractography, or should they be converted to binary masks?

Thank you very much,
Kurt

Does MRTrix have any graphical interface for drawing ROIs manually?

In mrview, click the “Tools” button on the toolbar, then select “ROI tool”. You can open existing ROIs or create new ones, and activate editing by clicking the “Edit” button within that tool.
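If you prefer launching it from the command line, the tool can also be opened directly when starting mrview. A sketch, assuming an existing ROI image named roi.mif; the -roi.load option should load it straight into the ROI editor tool:

mrview T1_raw.mif -roi.load roi.mif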

What happens if the streamlines stop just short of labels (for example if my labels don’t exactly align or match the WM/GM boundary)? Will they not be counted in the connectome generation?

tck2connectome by default will search for any labelled voxel within a 2mm radius of the streamline termination point, precisely for this reason.
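The search radius can also be set explicitly via that option. A sketch, with hypothetical track and parcellation image names:

tck2connectome tracks.tck parcels.mif connectome.csv -assignment_radial_search 2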

Does MRTrix have any capabilities for connectome generation using meshes as opposed to labels?

Not yet. But if the parcels are relatively large, then the difference in outcome between using a native mesh representation and using a voxel-based representation of such should be minimal.

Similarly, can meshes be used with anatomically-constrained tractography, or should they be converted to binary masks?

Again, not yet. But ACT does not require the tissues to be defined as binary masks: a partial volume fraction representation means that complex shapes of tissue interface surfaces can be represented without too much loss of precision.

Hi @schilkg1,

While it might be inappropriate to have a lengthy discussion within this post, I do wish to clarify some points regarding your 2nd and 3rd questions. Please see below.

Good question. We did spot the misalignment issue that you mention here; indeed, only ~25% of streamlines would be used in the connectome, largely because of such misalignment. We describe it as a source data mismatch problem: the WM/GM boundary derived from tissue partial volume maps (based on tissue segmentation) used in tractography is not always consistent with the boundary of the brain parcellation images used in connectome construction. Performing a 2-mm search from streamline endpoints (to find the nearest labelled voxel) is MRtrix3’s current default mechanism, implemented in tck2connectome, to deal with such inconsistencies (see the -assignment_radial_search option in the tck2connectome help page). Using the radial search, ~85% of streamlines will contribute to the resulting connectome. If interested, please see this abstract for the relevant influences.

To avoid losing precision, my approach to dealing with the source data mismatch is to use high-resolution tissue surface meshes directly, thereby unifying the source data: the same labelled surfaces are used for both tractography and connectome generation, as described in your question. I call this method Mesh-based ACT (or MACT); streamline endpoints are the intersection points between streamlines and the labelled surfaces (cortical and sub-cortical WM/GM surfaces). When using MACT for connectome generation, there is no need to convert, for example, the vertex-wise labels of HCP-MMP into volume-based labelled images.

For more details, please see the MACT abstract for reference. If you registered for the ISMRM 2017 meeting, I think my presentation video (0058) should still be accessible; it contains more explanation of the background and the principles of MACT.

Finally, regarding the use of a mid-cortex mesh: I believe it would only require a simple change to MACT. Coincidentally, I did once consider using a mid-cortex mesh as a default setting (rather than taking both the inner and outer cortical GM surfaces). If you or anyone else sees any potential benefits or applications of using the mid-cortex surface, I’d love to hear your insights and recommendations. So, even though MACT is not yet available in MRtrix3, it is under active development and has already been applied to a few studies. Any queries about MACT are very much welcome; please let me know via a personal message or an email.

Many thanks.


Hi @rsmith, I think I am having the same issue as @tkimura.

I am hoping someone can help me to operationalise the following instructions you provided?

If your 5TT image is intrinsically aligned with the T1 image data (given it was calculated from such), and you then estimate a transformation that aligns the T1 image with the DWI data, then applying that same transformation to the 5TT image will make those data also align with the DWIs

Sorry but I’m a bit of a novice.

Thank you, Dan

Okay, I have run the command again using the myprefix0000.nii.gz file rather than the myprefix0001.nii.gz file, and it worked. I’m not sure why this is the case, though? If anyone has the time to reply, that would be great. Kind regards, Daniel123

Hi Daniel,
the reason it works with myprefix0000.nii.gz is that this is the 3D volume in which the cortical gray matter boundaries are specified (see upper image). myprefix0001.nii.gz defines the subcortical gray matter boundaries (see lower image). If we continue like this, myprefix0002.nii.gz specifies the white matter regions, and so on. I guess by looking at the pictures it should be clear why using myprefix0001 does not work for a proper registration.
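As an aside, if you only want to inspect an individual tissue volume, you don’t necessarily need fslsplit: mrconvert can extract a single 3D volume from the 4D image. A sketch (the -coord option selects volume 0 along axis 3, and -axes drops the now-redundant fourth axis; the output name is hypothetical):

mrconvert 5tt_nocoreg.mif -coord 3 0 -axes 0,1,2 5tt_corticalGM.mif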


However, as @rsmith suggested above, the better idea might be to use the T1 as a reference image all along.


I am hoping someone can help me to operationalise the following instructions you provided?

If your 5TT image is intrinsically aligned with the T1 image data (given it was calculated from such), and you then estimate a transformation that aligns the T1 image with the DWI data, then applying that same transformation to the 5TT image will make those data also align with the DWIs

This is me providing a precise description such that anyone who does the wrong thing is technically disobeying my instructions, while minimising length because time is finite; I do that a lot. I’ll try to contextualise why I put it this way:

Consider doing your processing steps in this order:

  1. Generate a 5TT image from the T1.

  2. Perform registration between DWI and T1 data to estimate the spatial transformation that gets them aligned.

  3. Transform T1 data in space in order to get it aligned with the DWI data.

  4. Perform ACT using DWI data and the 5TT image.

The issue is that the DWI data and the 5TT image still don’t align with one another; so ACT will appear to be doing completely the wrong thing.

Upon generating a 5TT segmentation image from a T1-weighted image, those two images intrinsically align with one another in space, since one was generated directly from the other.

The reason I worded the quoted text in that way is because in this scenario, step 3 needs to be applied to the 5TT image also, in order to transform that information in space to align with the DWI data. However, step 2 should not be performed separately for the 5TT image. There is a single unique spatial transformation that is required to achieve alignment between DWI (and derived) data and T1-weighted (and derived) data, and that transformation (as estimated through registration) should be done using the image information most appropriate for that task (that’s the T1-weighted image, not the 5TT image). Once that transformation is estimated, it can be applied to any image for which undergoing that transformation is deemed appropriate.
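In command form, the correct order might look like this (a sketch, reusing the file names that appear elsewhere in this thread; the transformation is estimated between the mean b=0 and T1 images, and the single resulting transform is then applied to both the T1 and the 5TT image):

flirt -in mean_b0_preprocessed.nii.gz -ref T1_raw.nii.gz -dof 6 -omat diff2struct_fsl.mat

transformconvert diff2struct_fsl.mat mean_b0_preprocessed.nii.gz T1_raw.nii.gz flirt_import diff2struct_mrtrix.txt

#Apply the single estimated transform (inverted) to both the T1 and the 5TT image
mrtransform T1_raw.mif -linear diff2struct_mrtrix.txt -inverse T1_coreg.mif
mrtransform 5tt_nocoreg.mif -linear diff2struct_mrtrix.txt -inverse 5tt_coreg.mif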


Hi experts,

When following the appendix of the tutorial to prepare the parcellation image for structural connectivity analysis, I encountered a problem.

The tutorial says that two annotation files of the HCP MMP 1.0 atlas, named “lh/rh.hcpmmp1.annot”, should be copied to the path $SUBJECTS_DIR/fsaverage/label/. But in the command at the 3rd step, the file is “$SUBJECTS_DIR/fsaverage/label/lh.glasser.annot”, and I don’t know how to generate the “lh.glasser.annot” files.

Thanks.

Huan

Hi Huan!

Thanks for paying attention. The file name in the command should of course read “lh/rh.hcpmmp1.annot”. I’m referring to exactly the same files, but was accidentally inconsistent with the naming. So you can just copy the lh/rh.hcpmmp1.annot files into $SUBJECTS_DIR/fsaverage/label/ and then use the command with lh/rh.hcpmmp1.annot instead of lh/rh.glasser.annot.
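For example (a sketch, assuming the annot files sit in your current directory):

cp lh.hcpmmp1.annot rh.hcpmmp1.annot $SUBJECTS_DIR/fsaverage/label/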
Sorry for causing this confusion :wink:

Marlene

I have two questions:

  1. Is there any difference between a 5TT image generated directly from a T1 that was first registered to the b0, and a 5TT image generated from the (unregistered) T1 and then registered to the b0 using the T1-to-b0 transformation matrix? If there is a difference, which approach is recommended?
  2. Should I BET (skull-strip) the T1 and b0 images before the registration?

Hi Marlene,

First, thank you very much for a great tutorial. I have run the connectivity analysis using the HCP surface atlas, and I now want to extract the fornix tract (using connectome2tck). Do you have any idea which nodes are connected by this tract? From the literature, it connects the hippocampus and another region, but I don’t know what the other region is.

Thanks,
Cmid

Hi Cmid,

the fornix connects parts of the hippocampus to the mammillary bodies. The hippocampus is already included in the HCPMMP1 parcellation in the tutorial; however, the mammillary bodies are not. There is a very nice subcortical atlas provided by Ewert et al. (2017, NeuroImage), which offers very fine-grained parcellations of many subcortical structures, including the mammillary bodies. This atlas can be downloaded from here: https://www.lead-dbs.org/helpsupport/knowledge-base/atlasesresources/atlases/. Just identify the mammillary bodies there and register them to your diffusion image; then track between those two nodes.
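In command form, that last tracking step might look something like the sketch below, with hypothetical file names for the FOD image and the two ROIs (both ROIs assumed to already be registered to diffusion space):

tckgen wmfod.mif fornix.tck -seed_image hippocampus.mif -include mammillary_bodies.mif -select 1000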

Alternatively, you could also track the fornix by using tckedit with an ROI through which you know the fornix passes. I’m providing an example below (based on the tutorial data). In the first image, I loaded sift_1mio.tck onto T1_coreg.mif. There I found that the fornix passes through the coordinate 0.31,11.76,37.32. Then I used tckedit as follows

tckedit -include 0.31,11.76,37.32,3 sift_1mio.tck fornix.tck

to obtain the second image. Admittedly, this is just a quick & dirty approach to identifying the fornix, but depending on your question, this might be enough. In any case, I would also go for the Ewert atlas.

Cheers,
Marlene

Hey @martahedl,
hey @rsmith,

thanks for the great tutorial.
To register to the T1 image instead of the 5TT image, I adapted the commands to:

flirt -in mean_b0_preprocessed.nii.gz -ref T1_raw.nii.gz -interp nearestneighbour -dof 6 -omat diff2struct_fsl.mat

transformconvert diff2struct_fsl.mat T1_raw.nii.gz mean_b0_preprocessed.nii.gz flirt_import diff2struct_mrtrix.txt

mrtransform 5tt_nocoreg.mif -linear diff2struct_mrtrix.txt -inverse 5tt_coreg.mif -flip 0

This is the original code from the tutorial:

Is my adaptation correct?

For some reason I had to include the flip argument to get the same orientation as the other images. Do you know why or if I made a mistake?

Thanks for your help!

Best Regards,
Darius

Hi @tkimura @Darius @martahedl,

As @rsmith mentioned, we should use the T1 for registration and not any of the 5tt outputs. I’m going through @martahedl’s excellent tutorial myself, and have altered that portion of the pipeline as follows to get a good registration (using a T1_BET and a mean_bzero_BET).

#Create T1_raw.mif
mrconvert ../T1/ T1_raw.mif

#Run 5ttgen with -nocleanup option (so you have the T1_BET.nii.gz for registration)
5ttgen fsl T1_raw.mif 5tt_nocoreg.mif -nocleanup

#Create mean bzero image and BET it
#Visually check BET accuracy and tweak BET command if needed.
dwiextract dwi_den_unr_preproc_unbiased.mif - -bzero | mrmath - mean mean_b0_preprocessed.mif -axis 3
mrconvert mean_b0_preprocessed.mif mean_b0_preprocessed.nii.gz
bet mean_b0_preprocessed.nii.gz mean_b0_preprocessed_BET.nii.gz -c 47 49 30 -f 0.4

#Register mean_b0_preprocessed_BET with T1_BET (found inside 5ttgen temp folder)
#Visually check accuracy and tweak flirt command if needed.
flirt -in mean_b0_preprocessed_BET.nii.gz -ref 5ttgen-tmp-*/T1_BET.nii.gz -interp nearestneighbour -dof 6 -omat diff2struct_fsl.mat -out mean_b0_preprocessed_BET_regFSL.nii.gz -finesearch 5

#Convert FSL transform to MRTRIX transform
transformconvert diff2struct_fsl.mat mean_b0_preprocessed_BET.nii.gz 5ttgen-tmp-*/T1_BET.nii.gz flirt_import diff2struct_mrtrix.txt

#Apply (inverse) transform to the 5ttgen data
mrtransform 5tt_nocoreg.mif -linear diff2struct_mrtrix.txt -inverse 5tt_coreg.mif
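A possible final step (a sketch, mirroring the mrview overlay usage elsewhere in this thread) to visually verify the result:

#Check the coregistration by overlaying the transformed 5TT image on the DWI
mrview dwi_den_unr_preproc_unbiased.mif -overlay.load 5tt_coreg.mif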

For some reason I had to include the flip argument to get the same orientation as the other images.

I would have thought that transformconvert flirt_import would not need any additional manual manipulation like this. But there’s a whole history of dealing with axis definitions for FSL / NIfTI compatibility. Did you happen to perform an explicit -strides +1,+2,+3 or -strides -1,+2,+3 when producing the NIfTIs for FSL? If you find that including these fixes the issue, but omitting them leaves the remaining erroneous axis flip, we might be missing a step in the flirt_import processing.
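If you want to test this, the strides can be set explicitly at conversion time. A sketch (the option is spelled -strides in recent MRtrix3 versions, -stride in older ones):

mrconvert mean_b0_preprocessed.mif mean_b0_preprocessed.nii.gz -strides -1,+2,+3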

Just a small tweak to what I wrote above: I get better bzero to T1w registration (as one would expect) using the boundary-based registration available in FLIRT. FSL offers the epi_reg script for this, and it performs well even without having to BET the bzero image.

epi_reg --epi=mean_b0_preprocessed.nii.gz --t1=T1.nii.gz --t1brain=T1_BET.nii.gz --wmseg=T1_BET_pve_2.nii.gz --out=diff2struct_fsl
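The resulting diff2struct_fsl.mat can then be imported and applied exactly as before. A sketch, following the same transformconvert/mrtransform pattern used earlier in this thread (here the T1 is the registration reference):

transformconvert diff2struct_fsl.mat mean_b0_preprocessed.nii.gz T1.nii.gz flirt_import diff2struct_mrtrix.txt

mrtransform 5tt_nocoreg.mif -linear diff2struct_mrtrix.txt -inverse 5tt_coreg.mif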

Hi everyone,

I’m facing problems similar to those of @tkimura and @Daniel123 in the previous messages. I followed all the commands of step 4.1.1 in the tutorial (see below), but when I mrview the results, 5tt_coreg.mif turns out to be flipped to a sagittal orientation.

dwiextract dwi_den_unr_preproc_unbiased.mif - -bzero | mrmath - mean mean_b0_preprocessed.mif -axis 3
mrconvert mean_b0_preprocessed.mif mean_b0_preprocessed.nii.gz
mrconvert 5tt_nocoreg.mif 5tt_nocoreg.nii.gz
flirt -in mean_b0_preprocessed.nii.gz -ref 5tt_nocoreg.nii.gz -interp nearestneighbour -dof 6 -omat diff2struct_fsl.mat
transformconvert diff2struct_fsl.mat mean_b0_preprocessed.nii.gz 5tt_nocoreg.nii.gz flirt_import diff2struct_mrtrix.txt
mrtransform 5tt_nocoreg.mif -linear diff2struct_mrtrix.txt -inverse 5tt_coreg.mif

mrview dwi_den_unr_preproc_unbiased.mif -overlay.load 5tt_nocoreg.mif -overlay.colourmap 2 -overlay.load 5tt_coreg.mif -overlay.colourmap 1

I do not really understand the previously mentioned directions (sorry, I’m far from experienced). I hope there is someone who could help me solve this problem.

Regards,
Rianne