Registering T1 to dwi and running ROI to ROI analysis

Hi Everyone,

I am almost completely new to dwi analysis and MRtrix, so I apologise if these are fairly obvious questions.

I want to do fascicle tracking using a region-to-region method, e.g. hMT+ to hMT+. I have followed this pipeline and have ended up with a whole-brain tractogram of 100K tracks.

I was hoping for some advice on how to register my T1 anatomical scan to the dwi data (Flirt/fnirt would be my first guess, but if the MRtrix community advise another package, please let me know), so that I can then transform the ROIs created in the T1 space into the dwi space, and then use these for the ROI to ROI analysis.

Which brings me onto my next question: does anyone have any advice on the best way to do this? I assume it would be something along the lines of running: tckgen wmfod.mif MTtracks.tck -seed_image lh_MTroi_dwiSpace.mif -mask mask.mif -select 100k, where lh_MTroi_dwiSpace.mif is an anatomical ROI transformed into the dwi space. I also assume this would give the tracks running through this ROI and not necessarily the ROI to ROI tracks? Once again, any advice on this would be appreciated.

I also see track numbers in the order of 10 million, not 10K as per the tutorials. Is this simply altered by changing -select 100k to -select 10m?

For completeness here is each step I currently take: Please feel free to tell me if anything is obviously incorrect here.

Convert from nii.gz to .mif

mrconvert AP.nii.gz AP.mif -fslgrad AP.bvec AP.bval

mrconvert PA.nii.gz PA.mif -fslgrad PA.bvec PA.bval

Motion/distortion correction

mrconvert PA.mif -coord 3 0 PA_b0.mif

mrconvert AP.mif -coord 3 0 AP_b0.mif

mrcat AP_b0.mif PA_b0.mif b0s.mif -axis 3

Preprocess dwi data

dwipreproc AP.mif dwi_preprocessed.mif -pe_dir AP -rpe_pair -se_epi b0s.mif

Mask dwi data

dwi2mask dwi_preprocessed.mif mask.mif

Estimate response function for spherical deconvolution

dwi2response dhollander dwi_preprocessed.mif wm_response.txt gm_response.txt csf_response.txt

Perform constrained spherical deconvolution on the response function

dwi2fod msmt_csd dwi_preprocessed.mif wm_response.txt wmfod.mif gm_response.txt gm.mif csf_response.txt csf.mif

Perform whole-brain tractography with 100,000 tracks

tckgen wmfod.mif whole_brain_100k.tck -seed_image mask.mif -select 100k

Best wishes,
Mason

Welcome Mason!

I was hoping for some advice on how to register my T1 anatomical scan to the dwi data (Flirt/fnirt would be my first guess, but if the MRtrix community advise another package, please let me know),

We unfortunately still do not have the capability to natively register images of different modalities within MRtrix3; so yes, most people revert to using FSL flirt for this purpose. The most recent example is in this thread. You don’t want to be using fnirt, as one does not expect there to be non-linear distortions between two images of the same brain taken minutes apart.
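For reference, one common sketch of this step looks something like the following (file names here are placeholders; this assumes you have already run dwipreproc as per your pipeline):

```shell
# Extract the b=0 volumes from the preprocessed DWIs, average them,
# and convert to NIfTI for use as the flirt reference image
dwiextract dwi_preprocessed.mif -bzero - | mrmath - mean mean_b0.mif -axis 3
mrconvert mean_b0.mif mean_b0.nii.gz

# Rigid-body (6 degrees of freedom) registration of the T1 to the mean b=0
flirt -in T1.nii.gz -ref mean_b0.nii.gz -dof 6 -omat T1_to_dwi_fsl.mat

# Convert the FSL transformation matrix to MRtrix3 format,
# then apply it to the T1 image (and later to your ROIs)
transformconvert T1_to_dwi_fsl.mat T1.nii.gz mean_b0.nii.gz flirt_import T1_to_dwi_mrtrix.txt
mrtransform T1.nii.gz -linear T1_to_dwi_mrtrix.txt T1_coreg.mif
```

The same mrtransform call with the same matrix can then be applied to each ROI image (use -interp nearest for binary masks to keep them binary).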

Given the prevalence of this processing step, it could perhaps do with its own Wiki entry? Alternatively it wouldn’t be too hard for anyone to write a script within the MRtrix3 Python API to automate precisely this process :wink::crossed_fingers:

… use these for the ROI to ROI analysis. Which brings me onto my next question. Does anyone have any advice on the best way to do this?

tckgen wmfod.mif MTtracks.tck -seed_image lh_MTroi_dwiSpace.mif -mask mask.mif -select 100k

I also assume this would be the tracks running through this ROI and not necessarily the ROI to ROI tracks?

Correct. The only constraint that you are providing to tckgen that relates to your specific hypothesis is that you want the streamlines to start in the left hemisphere ROI; it is entirely oblivious at that point to the existence of a homologous ROI. Most likely you want to be using the -include option in tckgen, as documented here. Note that while it’s also technically possible to generate streamlines emanating from one ROI using tckgen -seed_image and then select only those streamlines intersecting another ROI using tckedit -include (and you might find discussions on this elsewhere on the forum), doing both steps in tckgen is a little bit simpler.
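As a sketch of what the -include approach might look like for your case (the ROI file names follow your own example; the right-hemisphere file name is assumed):

```shell
# Seed streamlines in the left hMT+ ROI, and only accept those
# that additionally intersect the right hMT+ ROI
tckgen wmfod.mif MT_to_MT.tck \
    -seed_image lh_MTroi_dwiSpace.mif \
    -include rh_MTroi_dwiSpace.mif \
    -mask mask.mif \
    -select 10k
```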

I also see track numbers in the order of 10 million, not 10K as per the tutorials. Is this simply altered by changing the -select 100k to -select 10m?

10 million is a pretty arbitrary number that is often used for whole-brain tractography when one wants every individual white matter bundle of interest to be reconstructed with a reasonable number of streamlines. If you are only reconstructing one specific pathway, you don’t need anywhere near that number; indeed even 100k is probably overkill for targeted tracking. Go with the tutorial’s 10k, and make an assessment from there.

For completeness here is each step I currently take:

That all looks pretty standard.

One caveat maybe to be aware of, just in case you’re copy-pasting the tutorial that is written assuming a certain type of data rather than tailoring for your own: Generating the file “b0s.mif” currently assumes strongly that the first volume in each image series is a b=0 image, and weakly that this is the only b=0 volume present in each series. A more robust approach is to use dwiextract -bzero, which will examine the diffusion gradient table to find which images are b=0’s. If you have multiple b=0 volumes within each series, this may make estimation of the susceptibility distortion field a little more robust.
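The dwiextract-based alternative would look something like this, using the same file names as in your pipeline:

```shell
# Select all b=0 volumes from each phase-encoding series,
# based on the diffusion gradient table rather than volume position
dwiextract AP.mif -bzero AP_b0s.mif
dwiextract PA.mif -bzero PA_b0s.mif

# Concatenate along the volume axis for input to dwipreproc -se_epi
mrcat AP_b0s.mif PA_b0s.mif b0s.mif -axis 3
```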

Another is that in your tckgen call, you’re providing your brain mask image as a seed image - meaning that all streamlines generated will have been initiated within that image - but you are not additionally providing that image as input to the -mask option, which would otherwise constrain the streamlines to propagate only within that mask. So with your current usage it’s possible that you might actually observe some streamline vertices outside of your brain mask.
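That is, something along these lines, providing the dwi2mask output both as the seed and as the tracking mask:

```shell
# Seed within the brain mask AND constrain propagation to that mask
tckgen wmfod.mif whole_brain_100k.tck \
    -seed_image mask.mif \
    -mask mask.mif \
    -select 100k
```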

Rob