Volume render with signed 16 bit integer data not working

Hey,

I tried to open the endings_segmentations files generated by TractSeg in volume render mode. Opening the files in the 2D slice view works normally, but upon switching to volume render mode the visualization vanishes.

I noticed that after combining the beginning (_b) and ending (_e) masks with mrcalc *_b.nii.gz *_e.nii.gz -max *.mif, the volume rendering succeeds.

After investigating with mrinfo, I noticed that mrcalc implicitly converts the 16 bit signed integer files exported by TractSeg to a 32 bit float data type (little endian). After explicitly converting all the files with mrconvert (e.g. mrconvert CC_b.nii CC_b.mif -datatype float32), I can now open all of them in volume render mode.

As this does not seem to be intended behaviour, I thought I would report it here.

I’m afraid I can’t reproduce the issue on my system… I don’t have any issues displaying int16 images in volume render mode. Any chance you could send me the problematic file? And/or post the full output of mrinfo on one of them? I’m wondering whether this might relate to the data scaling, or the size of the image.

Also, what system are you running on? Maybe you can post a screenshot of the Help->OpenGL dialog box? I’ve noticed funny issues on a colleague’s MacBook Pro recently that might have something to do with this…

I’m on a MacBook Pro running macOS 10.15 Catalina. I will send you the file by email.

************************************************
Image:               "CC_b.nii"
************************************************
  Dimensions:        145 x 174 x 145
  Voxel size:        1.25 x 1.25 x 1.25
  Data strides:      [ -1 2 3 ]
  Format:            NIfTI-1.1
  Data type:         signed 16 bit integer (little endian)
  Intensity scaling: offset = 0, multiplier = 1
  Transform:                    1           0           0         -90
                               -0           1           0        -126
                               -0           0           1         -72

We also always had problems with streamtube visualization of connectome data on Mac.

OK, I can confirm there is indeed a problem, which as far as I can tell comes down to the awkward difference in the way the OpenGL standard maps signed & unsigned integers to a normalised floating-point range (unsigned to [0 1], signed to [-1 1]). I’m going to need to mull on that one, and it’ll most likely have to wait until at least after ISMRM – sorry…

In the meantime, you can get around it by adding 0.5 to both intensity values and both transparency values in the View tool.
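To make the mapping difference concrete, here is a rough numerical sketch of the two OpenGL normalisation conventions (assuming the GL 4.2+ rules; the function names below are just for illustration, not anything from the MRtrix3 code):

```python
def normalise_unsigned(c: int, bits: int = 16) -> float:
    """Unsigned normalised integer: [0, 2^b - 1] -> [0, 1]."""
    return c / (2**bits - 1)

def normalise_signed(c: int, bits: int = 16) -> float:
    """Signed normalised integer: [-2^(b-1), 2^(b-1) - 1] -> [-1, 1],
    with the most negative value clamped to -1 (GL 4.2+ convention)."""
    return max(c / (2**(bits - 1) - 1), -1.0)

# A binary TractSeg mask stored as int16 holds only 0s and 1s:
for v in (0, 1):
    print(v, normalise_unsigned(v), normalise_signed(v))

# The signed path spans [-1, 1] rather than [0, 1], so a transfer
# function written for the unsigned convention sees the usable part of
# the signed range shifted by half -- consistent with the "add 0.5"
# workaround above.
```

So the full int16 range normalises to [-1, 1] instead of [0, 1], and the two conventions also disagree slightly on the step size near zero, which is why the int16 and float32 versions of the same mask render differently.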

Can you expand? Might it be related to this issue?