Mrconvert axes issue

Hi all,

I am converting nii/bvec/bval to .mif. This causes the orientation of the image to be lost in ITK-based programs (e.g., ITKsnap).

For example, the following command:
mrconvert original.nii.gz new.nii.gz
produces a complete upside-down flip of the image, which seems to happen because the voxel axes or strides are not correct. Here is the result: the upright image is the original, the flipped one is the new image.

I searched online to see whether the -axes option can fix this, but couldn’t find much documentation.

Any help?
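
In case it helps anyone searching later, here is a minimal sketch of the two mrconvert options relevant here (the filenames are placeholders; the stride option is spelled -stride in the 0.3.x series discussed in this thread, and -strides in more recent MRtrix3 releases):

# -axes selects/permutes the input image axes explicitly (0,1,2,3 keeps them as they are)
mrconvert original.nii.gz new.nii.gz -axes 0,1,2,3

# alternatively, force a particular memory layout by specifying the strides
mrconvert original.nii.gz new.nii.gz -stride 1,2,3,4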

That’s very strange. What does mrinfo report for the original and converted images? That might give us a clue as to what the problem is…
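
For example, a quick way to compare the two headers side by side (a sketch, using the filenames from the first post):

mrinfo original.nii.gz > orig_header.txt
mrinfo new.nii.gz > new_header.txt
diff orig_header.txt new_header.txt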

Found the solution, but not the problem.
Updating MRtrix to the most recent version (Oct 6 2016) resolved the issue; my old one was from April 20. I have also pointed to a newer Python (2.7.9) instead of the older 2.6.6, so maybe that is related too.

mrinfo showed the same headers in each case. Note that the erroneous conversion also caused the latest ITKsnap (3.6) to crash, while the older ITKsnap (3.4) could open the image (as did MRIcron). Hope this helps someone in the future.

ORIGINAL IMAGE


Dimensions: 96 x 96 x 55 x 80
Voxel size: 2.5 x 2.5 x 2.5 x 7.5
Data strides: [ -1 2 3 4 ]
Format: NIfTI-1.1 (GZip compressed)
Data type: signed 16 bit integer (little endian)
Intensity scaling: offset = 0, multiplier = 1
Transform: 1 0 0 -111.3
-0 1 0 -85.89
-0 0 1 -35.27
comments: ?TR:7500.000 TE:87

CONVERTED WITH OLD APRIL MRTRIX


Image: “dwitest.nii.gz”


Dimensions: 96 x 96 x 55 x 80
Voxel size: 2.5 x 2.5 x 2.5 x 7.5
Data strides: [ -1 2 3 4 ]
Format: NIfTI-1.1 (GZip compressed)
Data type: signed 16 bit integer (little endian)
Intensity scaling: offset = 0, multiplier = 1
Transform: 1 0 0 -111.3
-0 1 0 -85.89
-0 0 1 -35.27
comments: ?TR:7500.000 TE:87
mrtrix_version: 0.3.14-35-g37587663

CONVERTED IMAGE WITH NEW OCTOBER MRTRIX


Image: “dwitest.nii.gz”


Dimensions: 96 x 96 x 55 x 80
Voxel size: 2.5 x 2.5 x 2.5 x 7.5
Data strides: [ -1 2 3 4 ]
Format: NIfTI-1.1 (GZip compressed)
Data type: signed 16 bit integer (little endian)
Intensity scaling: offset = 0, multiplier = 1
Transform: 1 0 0 -111.3
-0 1 0 -85.89
-0 0 1 -35.27
comments: ?TR:7500.000 TE:87
mrtrix_version: 0.3.15-285-g68589079

Erratum: ITKsnap 3.6 still crashes with any kind of NIfTI image produced by MRtrix. Their version 3.4 works OK. I have let the ITKsnap developers know.

OK, good to know and thanks for reporting back. If you have FSL installed, it might be informative to run fslhd on these images, so we can compare the full header as interpreted by FSL. There might be a clue in there…
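
Something along these lines should do it (a sketch, assuming FSL is on the path and using the filenames above):

fslhd original.nii.gz > orig_fslhd.txt
fslhd dwitest.nii.gz > converted_fslhd.txt
diff orig_fslhd.txt converted_fslhd.txt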

Something similar happened to us in the past with the ITKSnap beta (now 3.6) and MRtrix. Since then we only use stable versions (even really old, well-tested versions, unless a new one has a feature we really need).

Good luck :slight_smile:

The fslhd difference is in the qform (I think ITK ignores the sform anyway).

Old MRtrix on Python 2.6.6

qform_name Scanner Anat
qform_code 1
qto_xyz:1 2.500000 0.000000 -0.000000 126.198547
qto_xyz:2 0.000000 2.500000 -0.000000 -85.887390
qto_xyz:3 0.000000 0.000000 -2.500000 -35.267555
qto_xyz:4 0.000000 0.000000 0.000000 1.000000
qform_xorient Left-to-Right
qform_yorient Posterior-to-Anterior
qform_zorient Superior-to-Inferior
sform_name Scanner Anat
sform_code 1
sto_xyz:1 -2.500000 0.000000 0.000000 126.198547
sto_xyz:2 0.000000 2.500000 0.000000 -85.887390
sto_xyz:3 0.000000 0.000000 2.500000 -35.267555
sto_xyz:4 0.000000 0.000000 0.000000 1.000000
sform_xorient Right-to-Left
sform_yorient Posterior-to-Anterior
sform_zorient Inferior-to-Superior
file_type NIFTI-1+
file_code 1
descrip MRtrix version: 0.3.14-35-g37587663
aux_file

New MRtrix on Python 2.7.9

qform_name Scanner Anat
qform_code 1
qto_xyz:1 -2.500000 0.000000 -0.000000 126.198547
qto_xyz:2 0.000000 2.500000 -0.000000 -85.887390
qto_xyz:3 0.000000 0.000000 2.500000 -35.267555
qto_xyz:4 0.000000 0.000000 0.000000 1.000000
qform_xorient Right-to-Left
qform_yorient Posterior-to-Anterior
qform_zorient Inferior-to-Superior
sform_name Scanner Anat
sform_code 1
sto_xyz:1 -2.500000 0.000000 0.000000 126.198547
sto_xyz:2 0.000000 2.500000 0.000000 -85.887390
sto_xyz:3 0.000000 0.000000 2.500000 -35.267555
sto_xyz:4 0.000000 0.000000 0.000000 1.000000
sform_xorient Right-to-Left
sform_yorient Posterior-to-Anterior
sform_zorient Inferior-to-Superior
file_type NIFTI-1+
file_code 1
descrip MRtrix version: 0.3.15-285-g68589079
aux_file

OK, this is most likely related to a bug introduced in the big March update, and subsequently fixed on April 22, related to handling of the NIfTI qform. The current code should produce the correct qform information. MRtrix3 uses the sform preferentially if present, which is why the output of mrinfo is always consistent. Newer versions of FSL will actually refuse to read these data, quite rightly producing an error message to the effect that the sform is not consistent with the qform. Hopefully this is all there is to it, and everything in the current code is correct…(?)
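
For anyone wanting to check whether their own converted images are affected, one quick way is to print the two matrices directly and compare them (a sketch, assuming FSL is installed; the filename is a placeholder):

# the qform and sform of a correctly written header should describe the same transform
fslorient -getqform dwitest.nii.gz
fslorient -getsform dwitest.nii.gz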

Just for completeness: I can’t see this having anything to do with the version of Python…

Yes, the rest is ok.

One unrelated thing I noticed (to avoid opening a dedicated thread) is that tckgen -nthreads 4 has been slower than tckgen -nthreads 2. It took 24 hours with 4 cores to create 14M streamlines, and around 12 hours with 2 cores to create 16M streamlines. This happened on a cluster and other factors may have influenced it, but I suspect that the heavy hard-disk access required by 4 cores may actually slow down the process. If anyone has thoughts, I’m happy to hear them.

This is really unexpected. In my experience, performance scales pretty linearly with number of cores. In fact, in many cases you even get better performance using more threads than you have cores (I think because it mitigates the impact of I/O latency)…

How many cores were available on that node? Were there any other processes running that might have slowed yours down?

Writing the output streamlines is handled by a single thread, so the slowdown is very unlikely to be due to high disk access - if that was a problem, I’d expect throughput to be the same regardless of the number of threads. Besides, in my experience, the rate of data production is relatively low (at least using default parameters), definitely much lower than the ~50MB/s that a reasonable drive should be able to sustain. Another option on a cluster might be network congestion, maybe due to other processes running concurrently? These clusters typically access storage over the network, so I/O might be affected by high network usage on other nodes…

Otherwise, maybe you’re using one of the more exotic seeding strategies? I could envisage that some of them might suffer from bottlenecks due to thread contention, but other than that, I can’t see why this would happen…

Yes, that was surprising to me as well. There are many possible reasons why it happened, and it’s not worth trying all the possibilities. If it happens again I will get back to you.

To summarize, there were 4 cores in my session (the nodes have 16 in total). The seeding was -seed_dynamic in both cases. Actually, tckgen was killed because the qlogin session expired, so I went back with a 2-core session and ran another tckgen. The greatest thing about MRtrix is that you can kill the process at any time and still use the track file, or build another one and add it to the previous. That’s what I did: 14M + 16M = 30M (I guess this is enough for a SIFT down to 5M?).
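
For reference, the SIFT step mentioned above would look something like this (a sketch; the track, FOD and output filenames are placeholders):

# filter the concatenated 30M streamlines down to 5M
tcksift combined_30M.tck wmfod.mif sift_5M.tck -term_number 5000000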

I don’t know exactly when my 16M tckgen finished, but I was expecting it to run for at least 24 hours and found it done this morning (after ~11-14 hours). A nice thing to have would be the processing time reported once any MRtrix command finishes (not sure if -verbose gives anything like that).

Hi Dorian,

While there are some peculiarities with dynamic seeding that can affect execution time, I suspect this is more likely to be due to use of a shared computing system. For instance, the fact that the system nodes have 16 cores but you are only using 2 or 4 means that the processing being performed on the other cores (or lack thereof) may affect cache performance. There may also be jobs running on other cores that thrash network storage, slowing other jobs. If you’re able to get a reproducible effect over many runs, let us know.

The greatest thing about MRtrix is that you can kill the process at any time and still use the track file, or build another one and add it to the previous.

One thing to be aware of is that you should use the track count as reported by tckinfo, not the number reported on the command line. The former reflects what has actually been written to the file on disk; the latter is what has been generated, but some of those streamlines will have been buffered in memory and not yet written to file when the command was killed. Concatenation of track files is also not ideal when using dynamic seeding, but those numbers should be high enough for repeating the dynamic seeding ‘burn-in’ not to have too much of an effect.
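
In concrete terms, something like this (a sketch; the filenames are placeholders):

# count the streamlines actually written to disk, rather than trusting the header field
tckinfo -count part1_14M.tck

# concatenate the two runs into a single track file
tckedit part1_14M.tck part2_16M.tck combined_30M.tck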

A nice thing to have would be the processing time reported once any MRtrix command finishes (not sure if -verbose gives anything like that).

Any half-decent HPC scheduling system should provide statistics on job execution times. Alternatively, you can use the time command to get the running time of any command or script (not just MRtrix3).
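
For example (a sketch; the filenames and options are placeholders):

# prepend 'time' to report wall-clock, user and system time once the command finishes
time tckgen wmfod.mif tracks.tck -seed_dynamic wmfod.mif -nthreads 2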

Cheers
Rob