For an exhibition next year in Berlin, we would like to use the Looking Glass holographic display. They provide a C++ SDK as a closed beta, but we have not heard back from them yet since applying.
We are thinking about extending the OpenGL shaders in MRView to enable a 3D holographic visualization of brain models.
The concept behind the optical illusion is explained here. I think we can build upon the Volume render mode and generate a quilt by rendering the scene from a number of different camera angles (separate viewports?), tiling those views into one image/window, which we then just move over to the separate display. A rough sketch of what I have in mind is below.
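Something along these lines perhaps, assuming a 5×9 quilt of 45 views (a common layout for the device), with hypothetical `compute_view_matrix()` / `render_scene()` helpers standing in for whatever MRView's Volume mode would provide:

```cpp
// Render each view of the scene into its own tile of the quilt.
// Assumes a current OpenGL context; all names here are illustrative.
void render_quilt (int quilt_width, int quilt_height)
{
  const int cols = 5, rows = 9, n_views = cols * rows;
  const int tile_w = quilt_width / cols;
  const int tile_h = quilt_height / rows;

  for (int v = 0; v < n_views; ++v) {
    // quilts are conventionally tiled from the bottom-left corner,
    // starting with the leftmost camera position:
    glViewport ((v % cols) * tile_w, (v / cols) * tile_h, tile_w, tile_h);

    // sweep the camera across the viewing cone for this view:
    const float t = float (v) / (n_views - 1) - 0.5f;   // -0.5 .. +0.5
    float view_matrix[16];
    compute_view_matrix (t, view_matrix);   // hypothetical helper
    render_scene (view_matrix);             // hypothetical helper
  }
}
```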
I suppose that @jdtournier in particular might have an idea of where to start playing around with this. Any other help is also very much appreciated.
Another way of providing graphical content for the device is via Unity or a closed beta of their Blender SDK.
I’ve no idea how this technology works, but hopefully it’s a simple matter of rendering to stereo buffers…? If so, that’s all possible within OpenGL 3.3. If you need more buffers than that, it gets tricky… but it might still be possible; I just don’t know the details of the interface between OpenGL and this type of display.
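From the description above, though, it sounds like the display takes a single quilt image containing all the views, in which case we may not need stereo buffers at all: everything could stay within core OpenGL 3.3 by rendering the quilt into an off-screen framebuffer. A rough sketch, assuming an active context (texture size illustrative):

```cpp
// Allocate a texture large enough to hold the full quilt and attach it to a
// framebuffer object -- all core OpenGL 3.3, no stereo buffers required.
GLuint quilt_tex, fbo;
glGenTextures (1, &quilt_tex);
glBindTexture (GL_TEXTURE_2D, quilt_tex);
glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA8, 4096, 4096, 0,
              GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers (1, &fbo);
glBindFramebuffer (GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                        GL_TEXTURE_2D, quilt_tex, 0);

if (glCheckFramebufferStatus (GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
  render_quilt (4096, 4096);   // tile all the views as sketched above
}
glBindFramebuffer (GL_FRAMEBUFFER, 0);   // back to the default framebuffer
```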
One big issue here is that we currently render using orthographic projection. For this to work, we would need to switch to perspective projection. It’s not a massive deal, but it will require fairly deep changes to the code to ensure it’s all handled consistently. I’d be happy to assist, as long as someone with a decent amount of OpenGL experience is willing to do the heavy lifting. If it helps, I’d be happy for them to come to London for a few days so we can work on this together; that might be the most productive way to get this done…
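To give an idea of the scale of the change: each quilt view would need a perspective projection, ideally an off-axis one so that all views converge on the same focal plane. A sketch of the matrix setup, using column-major matrices as OpenGL expects (all parameter names illustrative, not MRView's current API):

```cpp
// Standard asymmetric perspective frustum, equivalent to glFrustum:
void frustum (float l, float r, float b, float t, float n, float f,
              float m[16])
{
  for (int i = 0; i < 16; ++i) m[i] = 0.0f;
  m[0]  = 2.0f * n / (r - l);
  m[5]  = 2.0f * n / (t - b);
  m[8]  = (r + l) / (r - l);          // horizontal shear: off-axis term
  m[9]  = (t + b) / (t - b);
  m[10] = -(f + n) / (f - n);
  m[11] = -1.0f;                      // perspective divide by -z
  m[14] = -2.0f * f * n / (f - n);
}

// Off-axis view: camera shifted sideways by `offset`, frustum sheared so
// that all views share the same focal plane, at distance `focal` from the
// camera, with half-width `w` and half-height `h`:
void offaxis (float offset, float w, float h, float focal,
              float near_, float far_, float m[16])
{
  const float l = (-w - offset) * near_ / focal;
  const float r = ( w - offset) * near_ / focal;
  const float b = -h * near_ / focal;
  const float t =  h * near_ / focal;
  frustum (l, r, b, t, near_, far_, m);
  // the view matrix is then translated by -offset along the camera x axis
}
```

The orthographic matrix we use now maps depth linearly, so every view of the quilt would look identical; the perspective divide is what gives each camera position real parallax, which is presumably what the display relies on.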