TCK to VTK transform error

Dear MRtrix experts,

I am trying to convert my .tck file to .vtk using tckconvert, but when I import the result into Slicer, the orientation of the tracts looks off relative to the T1 image. I searched for this issue on the Slicer and MRtrix forums, but could not find any useful information.


[Screenshot from 2022-06-16 14-25-03]

The results in MRtrix, on the other hand, look like this:

Hello @jdtournier @rsmith

Any suggestions on how I can solve this error?

Hi @nayan_wadhwani,

Apologies, I’m not familiar enough with the specific conventions assumed when storing streamlines in VTK format to be able to give you a definite answer, but I can probably give you some pointers as to what might be going on.

In TCK, the position of each vertex (a 3D point along the streamline) is stored relative to the real / world / scanner coordinate system (essentially the XYZ axes of the scanner). In other formats, there’s a good chance the positions are stored in voxel coordinates relative to the axes of the diffusion MRI dataset they were generated from – I expect this might be the case with the VTK format (though I’m struggling to find concrete documentation on this). This means that to figure out the coordinates of each vertex in the VTK file, you need to provide a reference image whose metadata provides the information required to work out the mapping between scanner and voxel coordinates (typically the voxel sizes, image transform, image dimensions). This information would need to be provided to the tckconvert call, most likely via the -scanner2voxel option¹.
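To make the scanner-vs-voxel distinction concrete, here is a minimal sketch of the kind of mapping `-scanner2voxel` would apply. The affine values below are made up for illustration; in practice they come from the reference image header, and tckconvert's actual implementation may differ in detail.

```python
import numpy as np

# Hypothetical 4x4 voxel-to-scanner affine, as a NIfTI/MIF header might store:
# the upper-left 3x3 scales/rotates voxel indices, the last column is the
# translation of the image origin in millimetres.
vox2scanner = np.array([
    [2.0, 0.0, 0.0,  -90.0],   # 2 mm isotropic voxels,
    [0.0, 2.0, 0.0, -126.0],   # origin offset in mm
    [0.0, 0.0, 2.0,  -72.0],
    [0.0, 0.0, 0.0,    1.0],
])
scanner2vox = np.linalg.inv(vox2scanner)

def scanner_to_voxel(points_mm):
    """Map Nx3 scanner-space (mm) streamline vertices to voxel coordinates."""
    pts = np.c_[points_mm, np.ones(len(points_mm))]  # homogeneous coordinates
    return (scanner2vox @ pts.T).T[:, :3]

vertex_mm = np.array([[-80.0, -120.0, -60.0]])
print(scanner_to_voxel(vertex_mm))  # [[5. 3. 6.]]
```

The same machinery in reverse (multiplying by `vox2scanner`) is what MRtrix applies internally when reading images, which is why TCK vertices can be stored unambiguously in scanner space.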

This might be enough to fix the issue, but you then have the corresponding problem when loading the data into other packages like Slicer. I’m not familiar with this package, so I’m not sure how it handles data of this nature. But to be able to properly handle the VTK data, it would also need to know which coordinate system to assume when reading the data. I don’t think the VTK file contains any information about this (at least, the VTK files produced by tckconvert don’t), so the reference image would also need to be provided to Slicer for it to be able to work out where to put each vertex in real / scanner / world space (assuming that’s the coordinate system it uses internally, which might not be the case).

The situation is likely to get more complicated again if you’re trying to overlay the streamlines generated from a diffusion MRI dataset (with its own orientation, voxel size, and dimensions) on top of a different image, such as the anatomical (which will have its own different orientation, voxel size and dimensions). Again, it depends on how Slicer operates, but it could be that it then needs to figure out the transformation from the diffusion image to world coordinates, and then from world coordinates to the anatomical image. I have no idea how it handles this situation… Maybe it’s sufficient to provide the anatomical image to tckconvert -scanner2voxel, and then everything works out once in Slicer, but that would need testing.

Note that we avoid these issues in MRtrix by explicitly storing everything in world coordinates – no need for an external reference image, it’s completely unambiguous. This was an early design decision that I think has proved its worth…

Hope this helps,
Donald


¹though possibly using the -scanner2image option if vertices are stored in the image frame, but in millimeter units rather than voxel units – again, I can’t find solid documentation on this…

Dear @jdtournier ,

Thank you for the wonderful explanation. I tried all the solutions you suggested, and all possible scenarios to convert tck to vtk, but with no success. I did find one thread in the Slicer community (https://discourse.slicer.org/t/vtk-fiber-bundle-import-from-mrtrix-and-or-good-format-description-of-the-slicer-polydata-fiber-bundle-format/3846) in which one of the users has managed the conversion, but he has not shared the command he used. My command for conversion is
tckconvert -scanner2image/image2scanner/scanner2voxel t1.nii.gz cst.tck cst.vtk
and none of these options worked.
In the tckconvert help I can see two -scanner2voxel options; I suppose both are the same, since they have identical flags and spelling.


Is there anything else that I should try? My end goal is to convert tck to DICOM; let me know if you have any suggestions in regards to that.

Thanks,
Nayan

OK, I’ve tried to get to the bottom of this, and I think I’ve finally worked it out – though I have to admit documentation on this is surprisingly thin.

It took me a (long) while to figure out how to do the most basic things, but I eventually figured out that Slicer does assume a world coordinate system (though they call it the anatomical coordinate system), which does coincide with the one we use in MRtrix. So in principle, tckconvert should work out of the box with no additional options. However, they made a change a couple of years ago, which means that all models are now interpreted as being stored in a different coordinate system (the same one as the DICOM standard)… This means the x & y components of the streamline vertices need to be inverted compared to our storage – and unfortunately no amount of -scanner2voxel shenanigans is going to achieve that (at least not easily).
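For illustration, the RAS-to-LPS conversion described above amounts to nothing more than a sign flip on the first two components of each vertex. This sketch is not part of tckconvert; it just makes the geometry explicit:

```python
import numpy as np

def ras_to_lps(points):
    """Convert Nx3 vertices from RAS (the MRtrix/TCK world convention:
    +x = right, +y = anterior, +z = superior) to LPS (the DICOM / newer
    Slicer convention: +x = left, +y = posterior) by negating x and y."""
    return points * np.array([-1.0, -1.0, 1.0])

ras_vertex = np.array([[12.5, -30.0, 44.0]])
print(ras_to_lps(ras_vertex))  # [[-12.5  30.   44. ]]
```

Note the flip is its own inverse, so the same function converts LPS back to RAS. Getting this wrong silently mirrors the tractogram left-right, which is exactly the laterality hazard discussed below.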

Thankfully, there’s an option when you load the data file into Slicer that allows you to specify the coordinate system (you need to tick the ‘show options’ checkbox, and then select ‘RAS’):

With that, I was able to load a tractogram converted to VTK using tckconvert with no additional options, and it seems to match what mrview shows:


In the long run, we may need to modify the way we interpret VTK files if the LPS coordinate system is a recognised standard – what do you reckon, @blezek?

Sorry, couple more points:

No idea what happened there, but I only see one of those options on my system – and there’s no indication in the history that there were ever two of them. Maybe this was a glitch in your terminal display…?

I think you’ll find that Karawun is exactly what you’re looking for…

@jdtournier Thank you for your reply – that was a really deep dig on your side. I did know that Slicer inputs the image in the RAS coordinate system, but I was unaware of the option you showed while loading the data. That was really helpful.
I did try Karawun, and I was able to convert the tracts into DICOM, but I am unable to view that file in any DICOM viewer, such as OsiriX, Horos, or Slicer – it’s an empty file according to these programs. I do not have a Brainlab navigation system to check whether it has been properly converted or not; we have our own navigation system that we have built in-house, and it does not work in that either. So the whole idea would be to convert these via vtk → dicom and load the images into PACS for intra-op planning, if that all works.
Please let me know if you have any other suggestions to achieve this goal; I assume you must have a few ideas.
The above solution is extremely helpful.

@jdtournier seems every few years I have to work through DICOM, VTK, Slicer and mrtrix coordinate systems… :face_with_diagonal_mouth:

As you already discovered, VTK polyline files do not define any coordinate system; the points are simply stored as x,y,z. This was an intentional decision by the VTK team to leave interpretation up to the application developers. This makes .vtk files very flexible, and they have been used for all sorts of scientific visualization.

This could be done, and would be sensible if the VTK files were only used within Slicer’s LPS coordinate system. But I also load geometry into ParaView, which does not interpret points in LPS, but simply as XYZ. Slicer made the decision to standardize on an LPS coordinate system, and interprets .vtk files accordingly. As demonstrated by the OP, this can lead to possibly catastrophic laterality errors! Generally, rendering packages have a “RAS”-like coordinate system (a counterexample is RenderMan, which uses a left-handed “RAS”-like coordinate system – or perhaps an “LAS”-like coordinate system :grimacing:).

My recommendation would be to keep tckconvert’s coordinate-system-agnostic approach in place, but add an option to export / import in LPS so that files load correctly in Slicer, and give better interoperability with packages like dipy.

I’ll be in the tckconvert code soon and can add this option; I think it would be a good addition.


I was not familiar with Karawun, looks promising! We have BrainLab, so I’ll give it a try today – would be pretty amazing if it does work.

@nayan_wadhwani I wouldn’t be surprised that Horos, Slicer or OsiriX can’t load the DICOM. BrainLab uses a DICOM extension for streamlines that (I’m pretty sure) no other software implements. I’d be tempted to add it to MRtrix but without a proper DICOM library (like DCMTK), it’s very complicated to write the DICOM correctly.

RE: vtkDICOM, I’ve been very tempted to write a utility in MRtrix to render a .tck file into a volume, similar to the current ability to write a track density volume. However, this quickly becomes highly complicated – akin to writing a rendering engine, because you have to deal with orientations, sampling, colour, shading, etc. I did experiment with several hair rendering packages (including Blender, PBRT, LuxCore and RenderMan), but they only solve part of the problem, i.e. rendering the streamlines. Compositing over DICOM correctly is also tricky, because you have to window/level the image first.

I feel your pain…

OK, that’s reason enough to leave things as-is – and for me to never use VTK as a means of streamline storage, since as you say:

The last thing I want is for MRtrix to be used in any pipeline that results in the wrong side of someone’s brain being operated on. We had enough anguish over the Analyze format, I’m not desperate for the sequel…


I’ll leave that up to you. From my point of view, the lack of a standard for the coordinate system is enough for me to question whether we should even provide such a converter as part of the official release… I reckon the removal of support for VTK conversion would probably be classed as a regression, but we’ve been talking about providing additional not-officially-supported extensions via separate channels for a while now, and this would be a prime candidate, in my opinion (along with Bruker conversion…).

OK, as @blezek mentioned, the issue is likely not that the conversion failed, but that the specific packages you’re using don’t actually support the newer DICOM tractography standard. It sounds like what you’re looking for is to export the tracks as a regular image, which you could do with a call to tckmap to produce an image (potentially with colour-mapping) before converting that image to DICOM – which isn’t the same thing as storing the streamlines natively as DICOM objects.
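To show the rasterisation step in miniature, here is a crude sketch of what a track-density computation does: count, per voxel, how many streamlines pass through it. This is a deliberately simplified stand-in for tckmap (no supersampling, no DEC colour-mapping), assuming the streamline vertices have already been mapped into voxel coordinates:

```python
import numpy as np

def track_density(streamlines_vox, shape):
    """Crude track-density image: for each voxel, count the number of
    distinct streamlines that visit it. streamlines_vox is a list of
    Nx3 arrays of vertices already expressed in voxel coordinates."""
    tdi = np.zeros(shape, dtype=np.int32)
    for line in streamlines_vox:
        # Each streamline contributes at most 1 to any voxel it touches.
        visited = {tuple(np.floor(p).astype(int)) for p in line}
        for v in visited:
            if all(0 <= c < s for c, s in zip(v, shape)):
                tdi[v] += 1
    return tdi
```

The real tckmap additionally resamples streamlines so voxels are not skipped between sparse vertices, and the `-dec` variant accumulates per-voxel direction vectors for colour, but the core counting logic is the same.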

This is something we could potentially explore (writing arbitrary images to DICOM), though it’s not all that trivial to ensure all the required DICOM tags are present and consistent. But there is demand for this functionality from other quarters, so we may have to do something about it.

@blezek Once you try it in Brainlab, do share the command that you used (so that I can verify whether I have converted it correctly) and photos of the planning, so that I can try it in some other surgeon’s navigation system. I have been struggling to convert tracts to DICOM as they are, instead of converting them to an image format first, because tracts like this give the surgeon a much clearer picture than a TDI image of the tracts. It would be great if you could write this utility for converting tck into a volume; it would be extremely helpful for users like me, who use MRtrix from more of a clinical perspective than a research perspective. @blezek do you know of any rendering software or packages which support the tck format directly?

The reason I am not using tckmap is that sometimes it does not give as clear a picture as the tck file does. I might be using it in the wrong way, but I get the below images when I generate a TDI with tckmap, noting that I have not applied SIFT2 to my tracts.

I do not know of any rendering package that supports .tck files directly. A utility like tckmap (or an extension) to “render” the tracts would be extremely useful for me as well, but it is a big task.

The convert command was

importTractography \
  -d DICOM/Ax_MPRAGE-2/1.3.12.2.1107.5.2.43.166012.2018090506561097694411581.dcm \
  -o output-brainlab \
  -n t1.nii.gz \
  -t tracts/af.tck tracts/cst.tck tracts/ifof.tck tracts/ilf.tck tracts/or.tck tracts/slf.tck tracts/uf.tck

Once imported into BrainLab I had to find the correct MR image and streamline objects. The registration is incorrect because the streamlines were registered to the FLAIR image, not the MPRAGE. Also, we are doing this as an exploratory research project, not for clinical applications.

If you’re not using tckmap to generate an image of the streamlines, but you’re also not using a system that can import the streamlines directly (using the newer DICOM standard), then I’m not sure how you’re hoping to convert the tck to dicom…?

In any event, I reckon you can get pretty close if you use tckmap -dec -stat_vox mean. This is what this looks like in mrview:

original tck:

DEC tckmap at T1 resolution:

tckmap cst.tck -template T1.mif -dec -stat_vox mean out.mif

DEC tckmap at higher resolution:

tckmap cst.tck -template T1.mif -dec -stat_vox mean -vox 0.5 out.mif

Hopefully one of these might do the trick…?

Of these, the first tckmap call will generate an RGB image on the same voxel grid as the template T1, which will likely be a lot easier to export to DICOM and register to the T1 than the higher resolution one… But that all assumes we can convert to DICOM, which isn’t a capability we currently have. I have a feeling you can use Karawun for that step, though. Worth a try…

Dear @jdtournier ,

My plan was that, since Slicer has the ability to convert vtk to DICOM in the subject space, I would convert tck → vtk → dicom and test that. But I will definitely try tckmap and experiment with it more. Karawun looks like an excellent solution to the problem; testing will give a better perspective. Your tckmap results look brilliant – I will try to replicate them if I can.

@blezek Your results look amazing, and I will definitely try it. I used the same command as yours, so my results should be okay; I just need to try it in the Brainlab navigation system.