Full processing pipeline with single phase-encoding direction in MS

Dear developers,

I am relatively new to diffusion analysis and have a few questions about processing options, with the goal of building structural connectomes. I am working with DWI data from individuals with multiple sclerosis: a single b=0 volume and 64 directions at b=2000. All images were acquired with AP phase encoding. Please see my processing script below, punctuated with questions.

#Denoise, eddy + motion correction, estimate response function
dwidenoise dwi.mif dwi_den.mif -noise noise.mif -force
dwifslpreproc dwi_den.mif dwi_den_preproc.mif -nocleanup -rpe_none -pe_dir AP
dwi2response dhollander dwi_den_preproc.mif wm.txt gm.txt csf.txt -voxels voxels.mif -force

#Create mask (used this instead of the built-in ants/fsl options with dwi2mask as they were retaining substantial areas of the neck)
bet2 …/*dwi.nii.gz dwi_bet -m
mrconvert dwi_bet_mask.nii.gz dwi_bet_mask.mif
mrview dwi_bet_mask.mif

Ideally, I’d like to do single-shell 3-tissue CSD (ss3t_csd), along the lines of:

#Estimate fiber orientation distribution.
dwi2fod msmt_csd dwi_den_preproc.mif -mask dwi_bet_mask.mif wm.txt wmfod.mif gm.txt gmfod.mif csf.txt csffod.mif -force

But as indicated, with only two b-values I’m unable to estimate all three tissue types. Does that leave only the following as an option?

dwi2response tournier dwi_den_preproc.mif response.txt
dwi2fod csd dwi_den_preproc.mif response.txt fod.mif
mrview vf.mif -odf.load_sh wmfod.mif

This is the fod output. Is it too sparse?

Q: For the next steps, would it be better to use a freesurfer parcellation that was fed a binary lesion mask to improve segmentation (especially since I can’t perform distortion correction)?

Thank you for your help! It’s much appreciated.

-Heena

Hi Heena,

#Create mask (used this instead of the built-in ants/fsl options with dwi2mask as they were retaining substantial areas of the neck)

  • There are no ANTs / FSL options built in to dwi2mask; are you thinking of dwibiascorrect? Both of those algorithms will invoke dwi2mask if a brain mask is not explicitly provided, but the masking process itself is nevertheless done by MRtrix3.

  • Note that the prior dwi2response call will also invoke MRtrix3’s dwi2mask since a mask was not explicitly provided. You may want to confirm that the response function voxel selection was not influenced by errors in this mask.

  • In 3.1.0 the whole interface around DWI brain masking will change, which will make use of e.g. FSL’s bet2 a lot more convenient. See e.g. this page.
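In the meantime, you can make sure the bet2-derived mask is used consistently by passing it explicitly; a minimal sketch using the file names from your script (`-mask` and `-voxels` are real options of `dwi2response`):

```shell
# Convert the bet2 mask once, then pass it explicitly everywhere,
# so that no command falls back to its internal dwi2mask call.
mrconvert dwi_bet_mask.nii.gz dwi_bet_mask.mif
dwi2response dhollander dwi_den_preproc.mif wm.txt gm.txt csf.txt \
    -mask dwi_bet_mask.mif -voxels voxels.mif
# Visually confirm the response function voxel selection:
mrview dwi_den_preproc.mif -overlay.load voxels.mif
```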

But as indicated with only 2 b-vals, I’m unable to estimate all 3 tissue types. So does that only leave below as an option?

What generally gets recommended in this scenario (though admittedly this should perhaps be moved into the main documentation rather than scattered across various threads here) is to use the multi-tissue CSD algorithm, but provide only the WM and CSF response functions. This reduces contributions toward the ODF magnitude from free fluid, and also makes use of the hard non-negativity constraint of that algorithm rather than the soft constraint of the original CSD algorithm.
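Concretely, that two-tissue variant could look something like this (a sketch using the file names from your script):

```shell
# Estimate all three responses (the dhollander algorithm produces them),
# but feed only WM and CSF into multi-tissue CSD for single-shell data.
dwi2response dhollander dwi_den_preproc.mif wm.txt gm.txt csf.txt \
    -mask dwi_bet_mask.mif
dwi2fod msmt_csd dwi_den_preproc.mif \
    wm.txt wmfod.mif \
    csf.txt csffod.mif \
    -mask dwi_bet_mask.mif
```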

This is the fod output. Is it too sparse?

It’s unclear exactly what you are referring to here as “sparse”. The FODs themselves are also too small to really assess properly; this is simply the size scaling that can be controlled via the ODF plot tool in mrview.

For the next steps, would it be better to use a freesurfer parcellation that was fed a binary lesion mask to improve segmentation (especially since I can’t perform distortion correction)?

I think there are too many layers of ambiguity in this question for me to really be able to do anything with it. It’s unclear how one would expect to use a FreeSurfer parcellation if EPI distortion correction cannot be performed, nor what the intended purpose of the segmentation is.

Cheers
Rob

Hi Rob,

Thanks for the clarification on my last post. Using your suggestions and some additional reading, I have changed my preprocessing pipeline to the eight steps below. Could you kindly confirm that there are no MRtrix input/output errors in this pipeline? I recently changed my approach from seeding at the GM/WM boundary to dynamic seeding for generating streamlines. Could you also suggest any optimisations to this pipeline that may be glaring to your expert eye?

Just as a reminder: I’m still working with a dataset of people with multiple sclerosis, with 64 directions at b=2000 and a single b=0.

Step 1: QC

  • I used DTIPrep’s default QC template to identify artefactual gradients and then removed them from the raw diffusion data. Modified the bval and bvec files accordingly.

Step 2: Motion + Eddy current correction

  • dwifslpreproc 001_dwi.mif 001_dwi_preproc.mif -nocleanup -scratch $(pwd)/sub-001/ses-pre/dwi/mrtrix -rpe_none -pe_dir AP -force

Step 3: Generate brain mask

  • bet2 *dwi.nii.gz 001_dwi -m
  • mrconvert 001_dwi_mask.nii.gz 001_dwi_mask.mif -force

Step 4: Constrained Spherical Deconvolution

  • dwi2response dhollander 001_dwi_preproc.mif wm.txt gm.txt csf.txt -voxels voxels.mif -force

Step 5: Estimate FODs.

  • dwi2fod msmt_csd 001_dwi_preproc.mif -mask 001_dwi_mask.mif wm.txt wmfod.mif csf.txt csffod.mif -force

Step 6: Generate 5TT for ACT.

  • mrconvert $(pwd)/sub-001/ses-pre/anat/sub-001_ses-pre_run-01_T1w.nii.gz 001_T1.mif -force
  • 5ttgen fsl 001_T1.mif 001_5tt_nocoreg.mif -force
  • dwiextract -bzero 001_dwi_preproc.mif 001_b0_preproc.mif -force
  • mrconvert 001_b0_preproc.mif 001_b0_preproc.nii.gz -force
  • mrconvert 001_5tt_nocoreg.mif 001_5tt_nocoreg.nii.gz -force
  • fslroi 001_5tt_nocoreg.nii.gz 001_5tt_vol0.nii.gz 0 1
  • flirt -in 001_b0_preproc.nii.gz -ref 001_5tt_vol0.nii.gz -interp nearestneighbour -dof 6 -omat 001_diff2struct_fsl.mat
  • transformconvert 001_diff2struct_fsl.mat 001_b0_preproc.nii.gz 001_5tt_nocoreg.nii.gz flirt_import 001_diff2struct_mrtrix.txt -force
  • mrtransform 001_5tt_nocoreg.mif -linear 001_diff2struct_mrtrix.txt -inverse 001_5tt_coreg.mif -force
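As a quick visual check of the coregistration (not a pipeline step proper; file names as above), the first 5TT volume can be overlaid on the b=0:

```shell
# Extract the GM (first) volume of the 5TT image for a cleaner overlay.
mrconvert 001_5tt_coreg.mif -coord 3 0 -axes 0,1,2 001_5tt_gm.mif
# Check that tissue boundaries line up with the distorted b=0 anatomy.
mrview 001_b0_preproc.nii.gz -overlay.load 001_5tt_gm.mif -overlay.opacity 0.4
```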

Step 7: Generate streamlines.

  • tckgen -act 001_5tt_coreg.mif -backtrack -seed_dynamic wmfod.mif -nthreads 5 -maxlength 250 -cutoff 0.06 -select 10000000 wmfod.mif 001_tracks_10M.tck -force
  • tckedit 001_tracks_10M.tck -number 200k 001_smallerTracks_200k.tck -force
  • tcksift2 -act 001_5tt_coreg.mif -out_mu sift_mu.txt -out_coeffs sift_coeffs.txt -nthreads 6 001_tracks_10M.tck wmfod.mif 001_sift_1M.txt -force

Step 8: Build connectome.

  • labelconvert /path/to/glasser+aseg.mgz /path/to/glasser_aseg_LUT.txt /path/to/mrtrix3/labelconvert/fs_default.txt sub-001_glasser+aseg_parcels.mif -force
  • mrtransform sub-001_glasser+aseg_parcels.mif -interp nearest -linear 001_diff2struct_mrtrix.txt -inverse -datatype uint32 sub-001_glasser+aseg_parcels_coreg.mif -force
  • tck2connectome -symmetric -zero_diagonal -scale_invnodevol -tck_weights_in 001_sift_1M.txt 001_tracks_10M.tck sub-001_glasser+aseg_parcels_coreg.mif 001_glasser+aseg_parcels_coreg.csv -out_assignment assignments_001_glasser+aseg_parcels_coreg.csv -force
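For a quick look at the result (again just a check, not a pipeline step), mrview’s connectome tool can display the matrix on the parcellation; a sketch using the file names above:

```shell
# Initialise the connectome tool from the coregistered parcellation,
# then load the SIFT2-weighted connectivity matrix onto it.
mrview sub-001_glasser+aseg_parcels_coreg.mif \
    -connectome.init sub-001_glasser+aseg_parcels_coreg.mif \
    -connectome.load 001_glasser+aseg_parcels_coreg.csv
```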

Thank you for your suggestions and for developing this powerful tool. I and the community have much gratitude!

Best,
Heena

Dear Heena,

The ACT framework requires correction of magnetic susceptibility-induced distortion (which can be seen as a depression at the front of the head in the image above). If you don’t have reverse phase-encoded b=0 images, you can still do this with the SynB0 method. You will also get a transformation matrix to coregister your anatomical image to the diffusion image, which will be more accurate than the FLIRT transformation you are currently using.

Apart from that, the new pipeline is missing the denoising and unringing steps, which could go in at the start. Also, if you are using a FreeSurfer parcellation to build your tractogram, it might be better to use the same parcellation for the 5TT image, using the hsvs option. If you want to do that, I would suggest running your FreeSurfer pipeline first, and then use the FreeSurfer normalized T1 as the starting point for the SynB0 step and then apply the SynB0 output transform to the parcellation image to be used for 5ttgen and labelconvert.
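For reference, the hsvs variant takes the FreeSurfer subject directory directly; a minimal sketch, assuming recon-all has already been run and using a hypothetical subject directory name:

```shell
# $SUBJECTS_DIR/sub-001 is the FreeSurfer output directory for this subject.
5ttgen hsvs "$SUBJECTS_DIR"/sub-001 001_5tt_hsvs.mif
# Verify that the image satisfies the 5TT format expectations:
5ttcheck 001_5tt_hsvs.mif
```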

Hope this helps,
Nick


Hello @NicDC ,

Could you please point me to the SynB0 software? I would really appreciate it.

Many Thanks,
Suren

Hi Suren,

Normally, you can just download and run the container using Docker.

The instructions are here:

Another option is to just use their src/pipeline.sh bash script (after modifying the input/output paths, and the paths to your local installations, Python versions, etc.) if you already have FreeSurfer, FSL, ANTs and PyTorch set up (requires a CUDA GPU).
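For the Docker route, the invocation is roughly as follows (image name/tag and directory layout taken from the Synb0-DISCO README at the time of writing; double-check against the current instructions, and note the input file names here are illustrative):

```shell
# INPUTS/ holds the T1 and the distorted b=0; a FreeSurfer license file
# is required by the container.
mkdir -p INPUTS OUTPUTS
cp sub-001_T1w.nii.gz INPUTS/T1.nii.gz
cp 001_b0.nii.gz INPUTS/b0.nii.gz

docker run --rm \
    -v "$(pwd)"/INPUTS/:/INPUTS/ \
    -v "$(pwd)"/OUTPUTS/:/OUTPUTS/ \
    -v "$(pwd)"/license.txt:/extra/freesurfer/license.txt \
    --user "$(id -u):$(id -g)" \
    leonyichencai/synb0-disco:v3.0

# OUTPUTS/ then contains the synthetic undistorted b=0 and topup results,
# which dwifslpreproc can reuse via its -topup_files option.
```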

Hope this helps,
Nick
