Beginner for MRtrix3

Hello, I have started using MRtrix on my Mac and am quite new to diffusion MRI. I got some dMRI data from the dipy dataset here: http://nipy.org/dipy/examples_built/quick_start.html#example-quick-start
The data comprise one image in .nii.gz format along with .bvalue and .bvector files.

And I tried my first command line:

dwipreproc -rpe_none 2 HARDI193.nii.gz test.nii

It gave me this error:

dwipreproc:
dwipreproc: Note that this script makes use of commands / algorithms that have relevant articles for citation; INCLUDING FROM EXTERNAL SOFTWARE PACKAGES. Please consult the help page (-help option) for more information.
dwipreproc:
dwipreproc: Generated temporary directory: /tmp/dwipreproc-tmp-D23H72/
Command: mrconvert /Users/junhao.wen/.dipy/sherbrooke_3shell/HARDI193.nii.gz /tmp/dwipreproc-tmp-D23H72/series.mif
dwipreproc: Changing to temporary directory (/tmp/dwipreproc-tmp-D23H72/)
mrinfo: [ERROR] floating-point sequence specifier is empty
dwipreproc: [ERROR] Number of volumes in gradient table does not match input image
dwipreproc: Changing back to original directory (/Users/junhao.wen/.dipy/sherbrooke_3shell)
dwipreproc: Contents of temporary directory kept, location: /tmp/dwipreproc-tmp-D23H72/

I found your tutorial very good, but since I don’t have my own dMRI data right now, it would be great if you offered some tutorial data for newcomers; I guess that would make things much clearer for new MRtrixers :)

Thanks in advance

The main issue is hinted at here:

dwipreproc needs to know what diffusion encoding was used to acquire the data, and this information can’t be included within the NIfTI images themselves. So you need to provide the bvecs/bvals info separately using the -fslgrad option (assuming I’ve got the names right - amend as required):

dwipreproc -rpe_none 2 HARDI193.nii.gz test.nii -fslgrad HARDI193.bvector HARDI193.bvalue
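As a quick sanity check against the "Number of volumes in gradient table does not match input image" error, you can compare the entry count of the b-value file against the image dimensions; a minimal sketch, assuming FSL-style text files where the b-values sit on a single whitespace-separated row:

```shell
# One b-value per volume is expected, so the word count of the bvals file
# should equal the image's 4th dimension (193 for this dataset).
wc -w < HARDI193.bvalue
# Compare against the dimensions reported by:  mrinfo HARDI193.nii.gz -size
```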

But that’s not all: you will also want to get the corrected bvecs after motion correction. Since your output image is in NIfTI format, this information won’t be provided in the output image either. To get these, you need to add the -export_grad_fsl option:

dwipreproc -rpe_none 2 HARDI193.nii.gz test.nii -fslgrad HARDI193.bvector HARDI193.bvalue -export_grad_fsl HARDI193_corrected.bvector HARDI193_corrected.bvalue

As you can see, this makes for some rather cumbersome command lines… This is one of the (many) reasons we recommend you use MRtrix3’s own .mif format, which does store this information in the image header. The above would then be accomplished by first converting your input data to .mif (or .mif.gz if you’re tight on storage) with the diffusion gradient table included:

mrconvert -fslgrad HARDI193.bvector HARDI193.bvalue HARDI193.nii.gz HARDI193.mif

which then makes for a much cleaner dwipreproc command:

dwipreproc -rpe_none 2 HARDI193.mif test.mif

Note that this produces another .mif image, which will include the corrected gradient table after motion correction. If you really want to produce a NIfTI image, you’ll still need to use the -export_grad_fsl option.
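Put together, a sketch of that combination (reusing the filenames from above) would be:

```shell
# NIfTI cannot carry the gradient table, so export the motion-corrected
# bvecs/bvals alongside the NIfTI output:
dwipreproc -rpe_none 2 HARDI193.mif test.nii \
    -export_grad_fsl HARDI193_corrected.bvector HARDI193_corrected.bvalue
```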


One final note: are you sure your PE direction should be 2? That corresponds to the z (inferior-superior) axis, which is quite unusual. It’s more common for it to be along the y (anterior-posterior) axis, in which case you’d want a 1 here.

Thanks for the rapid answer; it is quite clear to me now.
You are right: the dMRI data were downloaded from dipy, and they don’t give info like the PE direction, so I really don’t know whether putting 2 here is correct. That is why I suggest you add some tutorial data to the tutorial section: for someone like me who doesn’t have their own data and knows little about dMRI, it is quite difficult to follow. By contrast, the FreeSurfer tutorial offers tutorial data, so users can first get a better understanding of the data and then follow the tutorial. That would be a suggestion for you. Thanks for your explanation, it is quite clear to me :)

Thanks
Hao

Tutorial data is a good suggestion, and has been proposed before. In fact, we have an open issue for it… We just haven’t had the time to do anything about it unfortunately. Hopefully we’ll get round to this eventually!

Yes, but you have done a good job already. Thanks for your reply :)

Good day

One final note: are you sure your PE direction should be 2? That corresponds to the z (inferior-superior) axis, which is quite unusual. It’s more common for it to be along the y (anterior-posterior) axis, in which case you’d want a 1 here.

I suspect users more frequently use the ‘LR’ / ‘RL’ / ‘AP’ / ‘PA’ / ‘IS’ / ‘SI’ flags to indicate their phase encoding direction, rather than an axis number (which some people will expect to start from 0, others will expect to start from 1). That might feel easier for you.
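For example, if your version of dwipreproc accepts these flags, an anterior-posterior acquisition could be written as (a sketch; substitute your own filenames):

```shell
# 'AP' replaces the ambiguous axis number (1 vs 2 depending on convention)
dwipreproc -rpe_none AP HARDI193.mif test.mif
```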

I’m hoping that the completely-re-written dwipreproc and improvements to DICOM import in the coming update will simplify this whole process…

Dear All:

I have a similar issue, but in my case I used a .mif file as input. Here is the script:

dwipreproc -rpe_none 2 ./Downloads/anger/anger_mrconverted.mif ./Downloads/anger/anger_out.mif
and I got this error:

dwipreproc: [ERROR] Number of volumes in gradient table does not match input image

Can you tell me what is the problem?

Have a nice day,

Aldo

This question is answered here.

Hi, Donald,

I tried to use ACT following the instructions at http://community.mrtrix.org/t/beginner-connectome-pipeline-updated/373. My commands are as follows:

  1. I ran the dwidenoise command:
    $ dwidenoise DTI/ DTI_denoise.mif

  2. I generated the DTI_denoise_convert.mif file as well as the DTI_denoise_convert.bvecs and DTI_denoise_convert.bvals files with the command:
    $ mrconvert DTI_denoise.mif DTI_denoise_convert.mif -export_grad_fsl DTI_denoise_convert.bvecs DTI_denoise_convert.bvals

  3. I ran the dwipreproc command:
    $ dwipreproc -rpe_none 2 DTI_denoise_convert.mif DTI_denoise_convert_preproc.mif

After the third step, the output is:

dwipreproc: 
dwipreproc: Note that this script makes use of commands / algorithms that have relevant articles for citation; INCLUDING FROM EXTERNAL SOFTWARE PACKAGES. Please consult the help page (-help option) for more information.
dwipreproc: 
dwipreproc: Generated temporary directory: /tmp/dwipreproc-tmp-M88GP6/
Command: mrconvert /media/windows-share/DTI_denoise_convert.mif /tmp/dwipreproc-tmp-M88GP6/series.mif
dwipreproc: Changing to temporary directory (/tmp/dwipreproc-tmp-M88GP6/)
Command: mrconvert series.mif dwi_pre_topup.nii -stride -1,+2,+3,+4
dwipreproc: Creating phase-encoding configuration file
Command: dwi2mask series.mif - | maskfilter - dilate - | mrconvert - mask.nii -datatype float32 -stride -1,+2,+3
Command: mrconvert series.mif - -stride -1,+2,+3,+4 | mrinfo - -export_grad_fsl bvecs bvals
Command: eddy --imain=dwi_pre_topup.nii --mask=mask.nii --index=indices.txt --acqp=config.txt --bvecs=bvecs --bvals=bvals --out=dwi_post_eddy

Then it just stopped there, without any error or further output.
After half an hour it was still in this state, so I had to cancel the command, which produced:

^CTraceback (most recent call last):
  File "/home/brain/mrtrix3/scripts/dwipreproc", line 317, in <module>
    runCommand(eddy_cmd + ' --imain=' + eddy_in + ' --mask=mask.nii --index=indices.txt --acqp=config.txt --bvecs=bvecs --bvals=bvals' + eddy_in_topup + ' --out=dwi_post_eddy')
  File "/home/brain/mrtrix3/scripts/lib/runCommand.py", line 104, in runCommand
    (stdoutdata, stderrdata) = process.communicate()
  File "/usr/lib/python2.7/subprocess.py", line 799, in communicate
    return self._communicate(input)
  File "/usr/lib/python2.7/subprocess.py", line 1409, in _communicate
    stdout, stderr = self._communicate_with_poll(input)
  File "/usr/lib/python2.7/subprocess.py", line 1463, in _communicate_with_poll
    ready = poller.poll()
KeyboardInterrupt

Also, I don’t know whether the PE direction is 2 or not; I tried different values, but the results were the same.
Oh, right, my mrinfo output is as follows. We probably can’t find the PE direction here, but I think it is worth showing:

************************************************
Image:               "DTI_30_average-2_denoise.mif"
************************************************
  Dimensions:        128 x 128 x 44 x 62
  Voxel size:        2 x 2 x 3 x ?
  Data strides:      [ -2 -3 4 1 ]
  Format:            MRtrix
  Data type:         32 bit float (little endian)
  Intensity scaling: offset = 0, multiplier = 1
  Transform:                0.999   5.447e-08     0.04556      -128.9
                          0.00254      0.9984    -0.05571      -112.4
                         -0.04549     0.05576      0.9974         -43
  comments:          zhou (090726cont) [MR] DTI_30_average-2
                     study: head Union Hospital
                     DOB: 26/07/1995
                     DOS: 26/07/2009 13:12:41
  dw_scheme:         [ 62 entries ]
  mrtrix_version:    0.3.15-266-g2bf78387

I don’t know how this happened. It seems that I interrupted the process. Really? If so, the process is too slow…
Do you have any ideas?

Thanks,
Chaoqing

Yes, eddy can take a while, it’s quite an intense process. If you have a CUDA-capable graphics card (i.e. NVIDIA with up to date drivers), then you could try running with the -cuda option to run the much faster GPU-accelerated version. Otherwise you should expect relatively long run times.

Also, your second step is redundant: the dwipreproc script will take care of exporting the bvecs/bvals internally as needed. And if your reason for this step is to use them in later processing stages, then that’s also not a good idea: dwipreproc will correct the bvecs for motion if your version of eddy performs the correction. So if you need to export the bvecs/bvals, you want to do that after dwipreproc.
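So if the bvecs/bvals are needed by later stages, a sketch of exporting them from the dwipreproc output instead (the output filenames here are just placeholders):

```shell
# Export the motion-corrected gradient table from the *preprocessed* image:
mrinfo DTI_denoise_convert_preproc.mif \
    -export_grad_fsl corrected.bvecs corrected.bvals
```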

Dear @SuperClear,

For your information, my dwipreproc (which internally invokes eddy) took about 4 hours to finish without CUDA (it can also use OpenMP to speed up the process somewhat). The CUDA version on an NVIDIA Tesla K20 card takes about 8 minutes.

Antonin


Wow, that’s a drastic difference. :astonished: Not entirely unexpected, but still.

I think this is in accord with the ~100× speedup on a single GPU compared to a single CPU core reported here:

https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FslInstallation#Running_bedpostX_on_a_GPU_or_GPU_cluster