Dilating the brain mask, and some other general queries

Dear MRtrix Community,

I was wondering if it is always necessary to perform mask dilation as mentioned/advised in a couple of posts (e.g. Beginner: Connectome pipeline). I previously followed the HCP tutorial which simply utilized preprocessed data output, seemingly without the step, so I am curious how necessary this would be.

In addition, I have the following general queries:

  1. I understand some older papers utilizing tractography would dilate the FreeSurfer-parcellated cortical ribbon in dMRI space by X mm in order to penetrate the superficial WM and reach the deep WM before initiating fiber tracking. With current tools such as ACT, would such a step still be advised?
  2. Do you reckon that there will be any updates to the steps recommended for structural connectome type analyses (i.e. in the HCP tutorial) since it was part of a 2015 conference? I am planning to apply these steps to other open source preprocessed data such as the UK Biobank which also provide multishell HARDI diffusion data and it would be really helpful to know!
  3. Are there any general guidelines regarding quality check of post-tractography results for structural connectome type analyses? It wouldn’t be a problem manually checking for a couple of datasets but this is an almost impossible feat when your analyses include well over a thousand participants… Would be extremely appreciative if anyone has any suggestions or could point me to some relevant papers or resources on this topic.

Thanks very much!

This step is purely to avoid issues with dwi2fod missing out parts of the brain due to an erroneous mask. Dilation is a simple fix to fill in the odd ‘hole’ in the mask, without otherwise affecting the analysis. Technically, you can run the dwi2fod step without a mask; the mask is there purely for performance reasons, so you don’t waste CPU cycles processing non-brain regions.
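For reference, a slight dilation of the mask before FOD estimation could look like this (filenames here are placeholders):

```shell
# Fill small holes in the brain mask with two dilation passes
maskfilter mask.mif dilate mask_dilated.mif -npass 2

# Use the dilated mask to restrict CSD to brain voxels
dwi2fod msmt_csd dwi.mif wm_response.txt wm_fod.mif \
    gm_response.txt gm.mif csf_response.txt csf.mif \
    -mask mask_dilated.mif
```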

No, the mask we’re talking about is the whole-brain mask used to restrict CSD processing to brain voxels. What you’re talking about here is dilating the cortical ribbon mask to ensure seed points are located within WM. This is not the same thing, and not required in MRtrix, whether with ACT or otherwise (unless you use DTI for tractography, maybe). At least I don’t think anyone has suggested this would be required…?

Undoubtedly. That tutorial does not include any preprocessing, since it assumes the preprocessed HCP data; nowadays we’d recommend running dwidenoise, mrdegibbs and dwipreproc. It relies on the msmt_5tt response function estimation procedure, whereas the current recommendation is probably the dhollander approach (at least that’s what @ThijsDhollander reckons… :wink:). And it uses tcksift, where we’d probably recommend tcksift2. There are bound to be other subtleties, but those are the main ones I can think of.
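For reference, a rough sketch of what an updated pipeline might look like per these recommendations (filenames are placeholders, and the exact dwipreproc options depend on your acquisition):

```shell
# Denoise first, then remove Gibbs ringing
dwidenoise dwi_raw.mif dwi_denoised.mif
mrdegibbs dwi_denoised.mif dwi_degibbs.mif

# Motion / eddy-current correction (options depend on your data)
dwipreproc dwi_degibbs.mif dwi_preproc.mif -rpe_none -pe_dir AP

# Multi-tissue response functions via the dhollander approach
dwi2response dhollander dwi_preproc.mif \
    wm_response.txt gm_response.txt csf_response.txt

# Multi-shell multi-tissue CSD
dwi2fod msmt_csd dwi_preproc.mif wm_response.txt wm_fod.mif \
    gm_response.txt gm.mif csf_response.txt csf.mif

# ... tractography with tckgen, then tcksift2 instead of tcksift
tcksift2 tracks.tck wm_fod.mif sift2_weights.txt
```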

That’s an interesting question – but unfortunately not one I have a good answer for… I’d always recommend visual inspection of results at each stage of the pipeline, but it’s true that we don’t have methods for automated QA for large studies. Maybe other users have tips on that front that they’d like to share…?


Regarding point 1:

In this sort of scenario, I would recommend dilating by only maybe 1-2 voxels, and then using the -seed_gmwmi option. The GM-WM interface seeding operates as described in the ACT paper. However, it relies on finding a local tissue partial volume gradient in order to localise the interface; this means that initial seeds drawn deep within the cortex will not lead to a successful streamline seed, which is problematic for FreeSurfer cortical ROIs where only voxels that predominantly contain cortex are included in the mask. So a slight dilation helps that algorithm to find the tissue interface more reliably, and with a more consistent coverage.
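A minimal sketch of that suggestion (filenames are placeholders):

```shell
# Dilate the binary cortical ROI by ~2 voxel passes so that it
# spans the GM-WM interface
maskfilter cortical_roi.mif dilate cortical_roi_dilated.mif -npass 2

# Seed at the GM-WM interface within the dilated ROI; ACT relocates
# each candidate seed onto the interface before tracking
tckgen wm_fod.mif tracks.tck -act 5tt.mif \
    -seed_gmwmi cortical_roi_dilated.mif -select 10000
```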

I have changes in the pipeline to enable unmodified use of cortical ROIs in ACT, but I hadn’t thought about how to expand that to include cortical ROI-based seeding; I’ll have to think about that one. Somewhere down the line, direct use of surface-based ROIs for both seeding and track selection will also be possible.


I certainly do! :face_with_monocle:

We really need to update that section in the manual. I get that it describes exactly what we did years ago at that example session, but it’s severely outdated, and many people seem to take it as advice. I’ll try to consistently refer people to these posts until we fix the manual, so they’re aware. :slightly_smiling_face:


Thanks very much for the very helpful answers from all! :grinning:

I have some semi-related questions regarding the HCP tutorial (I do agree it needs to be updated as lots of beginners to MRtrix/tractography tend to refer to it for advice!):

  1. With regard to the preprocessed structural files listed (aparc+aseg.nii.gz & T1w_acpc_dc_restore_brain.nii.gz), are these files that come from the HCP subject 100307’s T1w folder? There is another set of these files which lie within the MNINonLinear folder and I just wanted to be sure the tutorial was not referring to that.

  2. If yes to the above, I am assuming the aparc+aseg file is in the subject’s native space?

  3. Lastly, I am hoping to utilize a different atlas/parcellation (Schaefer’s functional parcellation) for structural connectome construction. This is provided in fsaverage space as .annot files. I would like to try this on the HCP aparc+aseg.nii.gz file used in the tutorial for starters. What is the best way to do this, since it would all be in FreeSurfer space? Could I simply utilize a LUT from that parcellation scheme and use the aparc+aseg file as it is? Or would I need a transformation from the fsaverage surface to the subject’s surface? I understand MRtrix does not currently take surface ROIs. I apologize if this goes beyond the scope of MRtrix, but I am unsure how this would apply to the aparc+aseg file. Would greatly appreciate any help!

Thank you so much!

It is in the AC-PC (anterior commissure to posterior commissure) aligned space; that is the space in which the HCP “native space” data lie.

To get your parcellation from .annot in fsaverage space to individual aparc+aseg, you would need to

  1. Map .annot from fsaverage to individual using mri_surf2surf - I could provide more details later once I get access to my computer
  2. Map surface labels to volume aparc+aseg using mri_aparc2aseg, something like
    mri_aparc2aseg --s your_subject_id --volmask --aseg aseg --annot your_annot_in_individual_space --annot-table your_LUT


Hi @Antonin_Skoch, thanks so much for your quick reply to my post!

I guess I’m a little confused: if the aparc+aseg (the one located in the 100307/T1w folder) is indeed in AC-PC aligned space, i.e. HCP “native space”, how is it that in the tutorial we can simply utilize the standard FreeSurfer LUT table for defining nodes?

Sorry, I should be clearer. The parcellation I am thinking of using is purely a cortical parcellation. My plan is to use that and retain the subcortical segmentation (performing similar steps to what was done in the ISMRM HCP tutorial). The mri_aparc2aseg command seems to be for mapping the cortical labels from aparc onto aseg?

I hope to test how this cortical parcellation would work on the HCP single subject. What do you think would be the best way to go about doing this? I am guessing I would also have to obtain a LUT for this specific parcellation to define the nodes?

Thanks - I appreciate any help I can get!


Dear @Michiko,

A LUT file (for example the FreeSurferColorLUT.txt file, which is used most frequently in FreeSurfer) only maps a voxel value to a particular label (structure name) and RGB values for display. It by no means provides any voxel-wise (or vertex-wise) label map. I suggest you read up on this to get a grasp of the significance of LUT files and how to manipulate them.

One of the functions of mri_aparc2aseg is to fuse a volume-based segmentation with a surface-based parcellation. Its output is a voxel-wise map (the aparc+aseg file) containing information from both. The mapping of the surface-based parcellation into the volume is done as follows: for each voxel in the cortical ribbon, find the nearest surface vertex, get its label from the .annot file, and assign that value to the voxel.

The actual cortical surface-based parcellation is encoded by the .annot file, which assigns a particular label (i.e. precentral gyrus etc.) to each vertex of the surface. If this .annot file is defined for the fsaverage subject, it can be used as an atlas via the mapping from fsaverage to the individual subject (but, as the FreeSurfer authors say, this is not equivalent to per-subject labelling using a probabilistic atlas).

Provided that you have a custom ‘atlas’, i.e. a .annot parcellation in fsaverage space together with its LUT, and your goal is to map this parcellation to the individual subject and encode it as voxel labels, you need to:

  1. Get the label assignment for each vertex of the individual subject’s surface, using the mapping of the subject to fsaverage, i.e. run

mri_surf2surf --srcsubject fsaverage --trgsubject your_subject_id --hemi lh --sval-annot path_to_annot_in_fsaverage/lh.your_annot.annot --tval $SUBJECTS_DIR/your_subject_id/label/lh.your_annot_in_individual_space.annot

mri_surf2surf --srcsubject fsaverage --trgsubject your_subject_id --hemi rh --sval-annot path_to_annot_in_fsaverage/rh.your_annot.annot --tval $SUBJECTS_DIR/your_subject_id/label/rh.your_annot_in_individual_space.annot

(using option 6 in mri_surf2surf --help)

  2. Fuse the surface-based parcellation and volume-based segmentation using

mri_aparc2aseg --s your_subject_id --volmask --aseg aseg --annot your_annot_in_individual_space --annot-table your_LUT
This will produce a new aparc+aseg file you can use for defining nodes for track assignment, as the ISMRM HCP tutorial does.

To continue with labelconvert as in the tutorial, you would also need to create a new MRtrix3 LUT file corresponding to your custom atlas LUT (provided that your custom parcellation is not already included in the FreeSurferColorLUT.txt file). The labelconvert documentation could help you work out how to do that.
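For example, assuming an MRtrix3-format LUT for your atlas has been created (all filenames here are placeholders), the node image would then be generated with:

```shell
# Convert the label values of the fused parcellation image into
# sequential integers suitable for connectome construction
labelconvert your_aparc+aseg.mgz your_LUT.txt your_mrtrix3_lut.txt nodes.mif
```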



Dear @Antonin_Skoch,

Thanks for being so detailed in your explanations of how to do this - I managed to map the cortical labels onto the volume-based segmentation file successfully.

Before I rerun parts of my analysis, I’d like to clarify something regarding the T1w_acpc_dc_restore_brain.nii.gz file within HCP subject 100307’s T1w folder. The tutorial states that we can just run 5ttgen on this file; however, I understand from other threads that the T1 would need to be transformed to diffusion space before running 5ttgen. When I checked, this file’s voxel dimensions are 0.7x0.7x0.7mm, whereas data.nii.gz (the diffusion image file) has voxel dimensions of 1.25x1.25x1.25mm. Is this right, or am I using the wrong files to follow the tutorial? The aparc+aseg.nii.gz file is also 0.7x0.7x0.7mm.

Side note: there is also another T1 file, T1w_acpc_dc_restore_1.25.nii.gz, within the diffusion folder, which has not been skull-stripped. The voxel dimensions of this file are 1.25x1.25x1.25mm.

I guess it’s a little confusing, both because the HCP data folder contains several similar files, and because the tutorial states: “Note that it is not necessary to use a tissue-segmented image that has the same resolution as the diffusion images”. So it’s unclear whether registration and transformation to diffusion space is needed at all?

Thanks, I really appreciate your help! Maybe some others who wrote the tutorial can chime in as well.

Best wishes,

Dear @Michiko,

I am working with the HCP pipelines, but not with the HCP data themselves, so I cannot give an answer specific to the HCP data (probably the dwi and structural HCP data are already in register?).

But in general, you have to ensure that the structural and diffusion data are in register. This, however, does not mean that their voxel sizes have to be identical. Registering and regridding (reslicing) are two distinct things: you can bring the data into a common coordinate system without reslicing, simply by adjusting the vox2RAS matrix in the image header.

To register the structural to the dwi (without reslicing the structural, only modifying its vox2RAS matrix), I would do the following:

Extract the b=0 images and average them:
dwiextract -bzero dwi.mif - | mrmath - mean -axis 3 dwi_b0_mean.nii.gz

Register dwi to structural:
bbregister --s $subject_id --mov dwi_b0_mean.nii.gz --init-coreg --lta dwi_b0_mean2fs.lta --dti

For 5ttgen I personally use norm.mgz, optionally with a refined skull-strip obtained by masking with the PreFreeSurfer brain mask (such masking may not be necessary with HCP data):

mri_mask $SUBJECTS_DIR/${subject_id}/mri/norm.mgz ${hcp_rootdir}/${subject_id}/T1w/T1w_acpc_brain_mask.nii.gz norm_preFS_brainmasked.nii.gz

Get the subject’s norm.mgz into register with the dwi (without reslicing):
mri_vol2vol --mov norm_preFS_brainmasked.nii.gz --targ dwi_b0_mean.nii.gz --lta-inv dwi_b0_mean2fs.lta --o norm2dwi.mgz --no-resample

A note: the structural PreFreeSurfer data are AC-PC aligned; if the same does not hold for the dwi, there is an optional step to bring the dwi into AC-PC alignment before running bbregister, to avoid possible failure of the registration. There should be an acpc.mat file in the HCP data directory which can be used for that, but then the steps are more complex.

A note 2: I personally currently prefer (maybe in contrast to common recommendations; I would appreciate comments from other experts on this!) to do the opposite, i.e. register the dwi to the structural, mainly for the following reasons:

The structural data processed by the HCP pipeline are AC-PC aligned. This is advantageous for all steps that require registration to MNI, since the registration has good initial conditions. This particularly matters for FSL FIRST (as part of 5ttgen and labelsgmfix), where I found that changing the initial head position can have a profound effect on the subcortical segmentation result.
I think the only pitfall in (rigid-body!) transforming dwi data after they have been preprocessed by dwipreproc is that you need to remember to rotate your .bvec file as well. However, when storing the dwi data in mif format, you no longer have to worry about this, since mrtransform handles it seamlessly.
You could also run 5ttgen in AC-PC aligned space and transform the resulting 5TT image (and also the FreeSurfer parcellation image, after labelsgmfix) to dwi space, but this seems less convenient to me.


Hi @Antonin_Skoch

Thanks for the very detailed suggestions. Do you know the best way of checking whether the data are in a common coordinate system? I have tried overlaying them and examining them visually, but I’m wondering if there is some quantitative way of doing this.

Thanks again for being super helpful!

Hi @michiko,

I cannot think of any reliable way other than overlaying the data and inspecting the registration quality visually.

In theory, you could pick some cost function (mean squared error, correlation ratio, mutual information, etc.), gather a representative group of visually well-aligned datasets, compute the mean and standard deviation of the cost function over that group, and compare these with the cost function value computed for the data in question. I personally do not have any experience with this, so I do not know how well the approach would work in practice.


I’d visually compare them. If you have dozens of images, I’d script opening one image overlaid on the other and grab a screenshot with mrview. If you have hundreds of images and want to narrow down which ones to look at, or need a quantitative measure, you can use mrmetric.
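For instance, a QA script along these lines could produce a screenshot and a dissimilarity value per subject (filenames and metric choice are placeholders):

```shell
# Screenshot of the structural image overlaid with the mean b=0
# for quick visual inspection
mrview t1_coreg.mif -overlay.load dwi_b0_mean.mif \
    -capture.grab -exit

# Quantitative dissimilarity between the two images
# (cross-correlation, evaluated on the average-space grid)
mrmetric t1_coreg.mif dwi_b0_mean.mif -metric cc -space average
```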


Hi @rsmith,

Thanks for your reply regarding this. Just to be really certain of what is going on with ACT: do you mean that in a typical scenario, without specifying the -seed_gmwmi option, ACT does not do GM-WM interface seeding but only WM seeding?

Based on what you know (the literature, personal experience), is there a significant difference when dilating the cortical ribbon and providing this mask during ACT? I’ve also read in a previous post of yours that tck2connectome ultimately does a 2mm radial search to find the nearest GM node; would this be sufficient if one were doing a connectome-type analysis?

Lastly, taking the example from the structural connectome tutorial: if one were using the tissue-segmented image provided by FSL for the 5ttgen step, how would you generate this, given that the GM nodes are specified in the aparc+aseg file by FreeSurfer? This gets a little confusing, since I have the impression you would want to dilate the GM labels 1-2mm into the WM, but you would then be providing a tissue-segmented image derived using FSL instead.

Thank you so much.

Just to be really certain of what is going on with ACT, do you mean that in a typical scenario without specifying the -seed_gmwmi option, ACT does not do GM-WM interface seeding but only WM seeding?

Any candidate seed point generated by the seeding mechanism must be “validated” against the 5TT image when ACT is used. The seed can be in WM, or (since a prior update) within sub-cortical GM. Only the GM-WM interface seeding mechanism is permitted to move the candidate seed point, in which case the seed point is moved onto the interface.

Based on what you know (the literature, personal experience), is there a significant difference with dilating the cortical ribbon and providing this mask during ACT?

I haven’t really played with this a whole lot; or if I did, I don’t recall the details. I do recall that there’s a degree of perturbation of candidate seed points, in an attempt to make the coverage of seed points as homogeneous as possible on the interface despite the discretisation of any ROI input. The 5tt2gmwmi command also outputs a floating-point image rather than a binary image, to improve that homogeneity. But the restriction of the FreeSurfer cortical ROI voxel output to only the interior of the cortex means that it’s both not at the GM-WM interface where we want it, and not homogeneous across a surface as we ideally want it.

If you want to assess the impacts of these things yourself, you can do the following:

  • Checkout and build the dev branch of MRtrix3
  • Use the -seed_gmwmi option with -algorithm seedtest
  • Load the resulting track file into mrview, and within the Tractography tool, change “Geometry” to “Points”
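A minimal sketch of the seed-test run itself, assuming a dilated cortical ROI and placeholder filenames:

```shell
# Generate and validate seed points only, without performing tracking;
# each accepted seed is written as a single-point "streamline"
tckgen wm_fod.mif seeds.tck -algorithm seedtest \
    -act 5tt.mif -seed_gmwmi cortical_roi_dilated.mif -select 5000
```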

I’ve also read that you mentioned in a previous post that tck2connectome ultimately does a 2mm radial search to find the nearest GM node - would this be sufficient then if one were to be doing a connectome type analysis?

The goal of this heuristic is to be “sufficient” for a connectome-type analysis, for any type of image-based parcellation. However, this heuristic does not solve the issue when trying to seed streamlines based on a cortical ROI; hence the suggestion of explicitly dilating the ROI prior to streamline seeding.
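If you want to experiment with that heuristic, the search radius can be set explicitly (filenames here are placeholders):

```shell
# Assign each streamline endpoint to the nearest parcellation node
# within a 2 mm radial search
tck2connectome tracks.tck nodes.mif connectome.csv \
    -assignment_radial_search 2
```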

Lastly, taking the example from the structural connectome tutorial, if one were to be using the tissue-segmented image provided by FSL for the 5ttgen step, how would you generate this, given that the GM nodes are specified in the aparc+aseg file by Freesurfer?

I don’t quite follow your logic here, but I think the confusion arises from a false equivalence between ACT and streamline seeding. Streamline seeds can be drawn from a multitude of different sources (just check the tckgen help page). In cases where the source of streamline seeds is an image, this image does not even need to lie on the same voxel grid as the DWIs or an ACT 5TT image; the seeds are drawn in scanner space. If ACT is being used, then the seed point must be validated against the 5TT image to ensure that the seed is biologically plausible and that tracking from that point is possible. Therefore there is no issue whatsoever with using a FreeSurfer-derived image for determining streamline seeds but a 5TT image derived predominantly using FSL tools; just as there’s no issue with manually drawing ROIs for tracking and deriving a 5TT image from FreeSurfer.

But there’s a fundamental mismatch between wanting streamline seeds at the GM-WM interface due to the constraints within ACT, and having ROIs that only include voxels entirely within the cortex: the two don’t overlap. The ACT GM-WM interface seeding mechanism will do its best to find that interface given a candidate seed point drawn from the seed ROI, but you can give it a bit of a helping hand by dilating the seed ROI to improve the overlap.

So I would encourage appreciating the difference between a “tissue segmentation” (e.g. the ACT 5TT image; note that this doesn’t contain “GM” and “WM” voxels, but rather a partial volume fraction of every tissue for every voxel, and hence “dilating” it is not well-posed) and a “Region Of Interest” (which may or may not be derived from some segmentation approach).