Hi, is anyone here experienced with using MRtrix3, or more specifically mrview, in combination with http://openigtlink.org?
That’s quite specific experience you’re looking for. It looks like a pretty impressive framework though, with a strong team of people behind it; you might have more luck asking them whether they have experience using MRtrix3 in any specific way with their framework. In general, it sounds like they can take most general image data, so in principle all MRtrix3 outputs that come as an image should be compatible (exported as e.g. a NIfTI image). Tractography data could, to some extent, be exported as a track density image, potentially thresholded depending on the setting and application. It looks like the framework may also accept tractography directly, but perhaps only in a specific format.
Most MRtrix3 tools aren’t all that real-time though, so direct interaction in such a setting may not always be possible (but in surgery, most actual processing/planning is carefully considered beforehand, on a case-by-case basis).
Maybe you could clarify what you’re (more) specifically after; I reckon that’d give you better chances of people chipping in with helpful answers…?
thanks for your answer – sorry for my poorly expressed question.
We would like to provide probabilistic tractography (or multiple tractography results via different tracto algorithms) in the OR.
For that, we had the idea of using Brainlab’s intra-op navigation setup in connection with MRtrix3 / mrview.
It seems that an intra-op navigation setting has already been used in connection to 3D Slicer (3D Slicer and Brainlab).
So the specific goal would be to stream real-time tracking data coming from Brainlab’s navigation system and feed it into mrview.
Furthermore, I should politely ask whether you guys would be fine with that kind of use of MRtrix3 after all.
Oh yes, thanks for the tip, I should ask the people behind openigtlink as well…
Looking forward to Paris!
Just to be clear, since the terminology between the two fields can get muddled: when you say “tracking”, are you referring to receiving positional data from the navigation software and using that to specify e.g. the focus position in 3D space within mrview? So purely a receive-only functionality in MRtrix3?
While this is certainly possible, and it looks like their framework and documentation are pretty good, it’s not likely to take priority for any developers at our end. We could however provide advice to anyone trying to pursue this (we are open-source, after all).
Yes, exactly, only receiving positional data.
Thanks for your answer, I’ll speak to computer scientists connected to our institution and see how things work out.
Hi Rob, hi Lucius,
We have a very similar approach in mind (receiving and visualising positional data from the navigation software), so we are thinking about integrating the OpenIGTLink library with MRtrix3. Could you maybe roughly point out how you would pursue this? That would be very helpful, as we don’t have any experience with the underlying MRtrix3 codebase just yet.
@Lucius: may I ask if you actually followed through with your approach?
Hi, unfortunately we’re still looking for someone who’s interested in programming it… Maybe we can join forces?
Sorry about the delay, I’ve had a lot on recently…
I’ve not heard of the OpenIGTLink library before, and I’m not sure what it does… But it strikes me that much of what you’re after might fall under the remit of a feature we’ve been discussing on & off for a very long time: synchronisation between MRView instances. @Lee_Reid has now grabbed the bull by the horns and submitted a pull request for this feature. I’ve since gone and put multiple spokes in his wheels (sorry… ), but the plan going forward would likely be a lightweight protocol based on UDP broadcasts. This ought to be trivial to interface with from an external application: all that would be needed is a small app receiving updates from your workstation and forwarding them on to the viewer via its own protocol. I recommend we discuss this with @Lee_Reid over on the GitHub pull request.
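To make the “small app receiving updates and forwarding them on” idea more concrete, here is a minimal sketch of the receiving half, based on the publicly documented OpenIGTLink v1 wire format (a 58-byte big-endian header, and a TRANSFORM body of twelve big-endian float32 values: three rotation columns followed by the translation vector). The function name and overall structure are my own for illustration; the MRView-side protocol discussed above was not finalised in this thread, so only the OpenIGTLink parsing is shown.

```python
# Sketch: parse an OpenIGTLink v1 TRANSFORM message and extract the tool
# position, which a bridge app could then forward to a viewer (e.g. as a
# focus position) via whatever lightweight protocol the viewer exposes.
import struct

# OpenIGTLink v1 header: version (uint16), type (char[12]), device name
# (char[20]), timestamp (uint64), body size (uint64), CRC (uint64); big-endian.
HEADER_FMT = ">H12s20sQQQ"
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 58 bytes

def parse_igtl_transform(packet: bytes):
    """Return (device_name, (x, y, z)) from a raw TRANSFORM message."""
    version, msg_type, device, timestamp, body_size, crc = struct.unpack(
        HEADER_FMT, packet[:HEADER_SIZE])
    if msg_type.rstrip(b"\x00") != b"TRANSFORM":
        raise ValueError("not a TRANSFORM message")
    body = packet[HEADER_SIZE:HEADER_SIZE + body_size]
    # Twelve float32 values: R11 R21 R31  R12 R22 R32  R13 R23 R33  TX TY TZ
    floats = struct.unpack(">12f", body)
    x, y, z = floats[9], floats[10], floats[11]  # translation column
    return device.rstrip(b"\x00").decode(), (x, y, z)
```

In a real bridge this would sit in a loop reading from a TCP socket connected to the navigation system, with the extracted position rebroadcast over UDP in whatever format the viewer-synchronisation protocol settles on. Note the sketch skips CRC verification and partial-read handling, both of which a production bridge would need.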