I am relatively new to diffusion analysis and have a few questions about processing options, with the goal of building structural connectomes. I am working with DWI data from individuals with multiple sclerosis, acquired with a single b=0 volume and 64 directions at b=2000, all with AP phase encoding. Please see my processing script below, punctuated with questions.
#Create mask (used this instead of the built-in ants/fsl options with dwi2mask as they were retaining substantial areas of the neck)
bet2 …/*dwi.nii.gz dwi_bet -m
mrconvert dwi_bet_mask.nii.gz dwi_bet_mask.mif
mrview dwi_bet_mask.mif
Ideally, I’d like to do single-shell 3-tissue CSD (ss3t_csd).
Q: For the next steps, would it be better to use a freesurfer parcellation that was fed a binary lesion mask to improve segmentation (especially since I can’t perform distortion correction)?
#Create mask (used this instead of the built-in ants/fsl options with dwi2mask as they were retaining substantial areas of the neck)
There are no ANTs / FSL options built into dwi2mask; are you perhaps thinking of dwibiascorrect? Both of those algorithms will invoke dwi2mask if a brain mask is not explicitly provided, but the masking process itself is nevertheless done by MRtrix3.
Note that the prior dwi2response call will also invoke MRtrix3’s dwi2mask since a mask was not explicitly provided. You may want to confirm that the response function voxel selection was not influenced by errors in this mask.
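To avoid relying on the internal dwi2mask call, you can pass your bet2-derived mask to dwi2response explicitly and export the voxels it selected for inspection. A minimal sketch, with placeholder filenames (the tournier algorithm here is just an example; substitute whichever algorithm you are using):

```shell
# Pass the mask explicitly so dwi2response does not fall back to
# its internal dwi2mask call; export the selected voxels for QC.
dwi2response tournier dwi.mif response.txt \
    -mask dwi_bet_mask.mif -voxels response_voxels.mif

# Overlay the selected voxels on the DWI to confirm they sit in
# plausible single-fibre white matter.
mrview dwi.mif -overlay.load response_voxels.mif
```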
In 3.1.0 the whole interface around DWI brain masking will change, which will make use of e.g. FSL’s bet2 a lot more convenient. See e.g. this page.
But as indicated, with only two b-values I’m unable to estimate all three tissue types. So does that leave only the below as an option?
What generally gets recommended in this scenario (but admittedly should perhaps be moved to the main documentation rather than scattered in various threads here) is to use the multi-tissue CSD algorithm, but provide only the WM and CSF response functions. This decreases contributions toward ODF magnitude from free fluid, and also utilises the hard-negativity constraint of that algorithm rather than the soft constraint of the original CSD algorithm.
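As a sketch of that recommendation, assuming placeholder filenames: estimate all three response functions (the dhollander algorithm works on single-shell data), then run msmt_csd providing only the WM and CSF responses, deliberately omitting the GM response.

```shell
# Estimate WM / GM / CSF response functions; the GM response is
# computed but will not be passed to dwi2fod below.
dwi2response dhollander dwi.mif wm_response.txt gm_response.txt csf_response.txt \
    -mask dwi_bet_mask.mif

# Multi-tissue CSD with WM + CSF only (two tissues, two b-values).
dwi2fod msmt_csd dwi.mif \
    wm_response.txt wmfod.mif \
    csf_response.txt csffod.mif \
    -mask dwi_bet_mask.mif
```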
This is the fod output. Is it too sparse?
It’s unclear exactly what you are referring to here as “sparse”. The FODs themselves are also too small to really assess properly; this is simply the size scaling that can be controlled via the ODF plot tool in mrview.
For the next steps, would it be better to use a freesurfer parcellation that was fed a binary lesion mask to improve segmentation (especially since I can’t perform distortion correction)?
I think there are too many layers of ambiguity in this question for me to really be able to do anything with it. It’s unclear how one would expect to use a FreeSurfer parcellation if EPI distortion correction cannot be performed, nor what the intended purpose of segmentation is.
Thanks for the clarification on my last post. Using your suggestions and additional reading, I have changed my preprocessing pipeline to the 8 steps below. Could you kindly confirm that there are no MRtrix3 input/output errors in this pipeline? I recently changed my approach from using the GM/WM boundary to using dynamic seeding for generating streamlines. Could you please suggest any ways to optimize this pipeline that may be glaring to your expert lens?
Just as a reminder: I’m still working with a dataset of people with multiple sclerosis, with 64 directions at b=2000 and a single b=0.
Step 1: QC
I used DTIPrep’s default QC template to identify artefactual gradients and then removed them from the raw diffusion data. Modified the bval and bvec files accordingly.
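If it helps, the volume removal can also be done in a single mrconvert call rather than by hand-editing the bval/bvec files. A hypothetical example, assuming 65 volumes (indices 0–64) and that DTIPrep flagged volumes 12 and 47:

```shell
# Build a comma-separated list of volume indices to keep,
# excluding the hypothetical bad volumes 12 and 47.
keep=$(seq 0 64 | grep -vw -e 12 -e 47 | tr '\n' ',' | sed 's/,$//')

# -coord 3 selects volumes along the 4th (volume) axis;
# -export_grad_fsl writes the matching trimmed bvec/bval files.
mrconvert 001_dwi.nii.gz -fslgrad 001.bvec 001.bval \
    -coord 3 "$keep" \
    -export_grad_fsl 001_clean.bvec 001_clean.bval \
    001_dwi_clean.mif
```

Because the gradient table is carried alongside the image data, this avoids the risk of the bvec/bval files drifting out of sync with the removed volumes.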
Step 2: Motion + Eddy current correction
dwifslpreproc 001_dwi.mif 001_dwi_preproc.mif -nocleanup -scratch $(pwd)/sub-001/ses-pre/dwi/mrtrix -rpe_none -pe_dir AP -force
The ACT method requires correction of magnetic susceptibility induced distortion (which can be seen in the depression of the front of the head in the image above). If you don’t have reverse-phase encoded B0 images, you could still do this with the SynB0 method. You will also get the transformation matrix to coregister your anatomical image to the diffusion image, which will be more accurate than the FLIRT transformation you now use.
Apart from that, the new pipeline is missing the denoising and unringing steps, which should go in at the start. Also, if you are using a FreeSurfer parcellation to build your tractogram, it might be better to derive the 5TT image from that same parcellation using the hsvs option. If you want to do that, I would suggest running your FreeSurfer pipeline first, using the FreeSurfer-normalised T1 as the input to the SynB0 step, and then applying the SynB0 output transform to the parcellation image before passing it to 5ttgen and labelconvert.
Normally, you can just download and run the container using Docker.
The instructions are here:
Another option is to run their src/pipeline.sh bash script directly (after modifying the input/output paths, and the paths to your local installations, Python versions, etc.) if you already have FreeSurfer, FSL, ANTs and PyTorch set up (a CUDA GPU is required).
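For the containerised route, the invocation looks roughly like the sketch below. The image tag and the INPUTS/OUTPUTS mount convention here are assumptions based on the Synb0-DISCO README; verify them against the repository before use.

```shell
# Assumed layout: raw inputs in ./INPUTS, results written to
# ./OUTPUTS; a FreeSurfer license must be mounted into the
# container. Image name/tag is an assumption -- check the repo.
docker run --rm \
    -v "$(pwd)/INPUTS:/INPUTS" \
    -v "$(pwd)/OUTPUTS:/OUTPUTS" \
    -v /path/to/freesurfer/license.txt:/extra/freesurfer/license.txt \
    leonyichencai/synb0-disco:v3.0
```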