Single-Pass Stereoscopic GPU Ray Casting Using Re-Projection Layers

Authors: Buchacher, Arend; Erdt, Marius
Editors: Matthias Zwicker and Pedro Sander
Date issued: 2017-06-19
ISBN: 978-3-03868-045-1
ISSN: 1727-3463
DOI: https://doi.org/10.2312/sre.20171190
Handle: https://diglib.eg.org:443/handle/10.2312/sre20171190
Pages: 11-18

Abstract:
Stereoscopic rendering of volume data for virtual reality applications is costly, as the computational cost virtually doubles compared to common monoscopic rendering. This paper presents a single-pass stereoscopic GPU volume ray casting technique that significantly reduces the time needed to produce the second view. The approach builds upon previous work on ray segment re-projection for non-parallel software ray casting, which is not directly applicable to GPU ray casting. Following the previous approach, ray casting is executed only for the left view; at the same time, ray segments are re-projected into the layers of a texture array, which addresses the constraints of the previous approach. In a subsequent compositing pass, the layers are blended to produce the final image. Additionally, ways to determine an appropriate set of parameters are presented. Performance experiments show significant time savings in producing the second view over the naive two-pass approach, achieving well over 60% speed-up in a typical virtual reality setup. The trade-off is a memory overhead proportional to the number of layers and the image resolution, and a marginal reduction in image quality: in qualitative experiments, average DSSIM values of less than 1% were recorded.
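
The abstract describes blending the re-projection layers in a compositing pass to form the right-eye image. As a rough illustration only (not the paper's code, and the blend order, layer layout, and premultiplied-alpha convention are assumptions), a minimal front-to-back "over" compositing of a layer stack can be sketched as:

```python
# Illustrative sketch: composite a stack of RGBA layers (front-most
# first, premultiplied alpha) into one image, as a compositing pass
# over texture-array layers might do. Purely an assumption-driven
# example; the paper's actual pass runs on the GPU.
import numpy as np

def composite_layers(layers):
    """Blend layers of shape (L, H, W, 4) front to back; return (H, W, 3) RGB."""
    h, w = layers.shape[1:3]
    out_rgb = np.zeros((h, w, 3))
    out_a = np.zeros((h, w, 1))
    for layer in layers:                   # front-most layer first
        rgb, a = layer[..., :3], layer[..., 3:4]
        out_rgb += (1.0 - out_a) * rgb     # accumulate premultiplied color
        out_a += (1.0 - out_a) * a         # accumulate opacity
    return out_rgb

# A fully opaque red front layer should hide the white layer behind it.
front = np.zeros((1, 2, 2, 4)); front[..., 0] = 1.0; front[..., 3] = 1.0
back = np.ones((1, 2, 2, 4))
img = composite_layers(np.concatenate([front, back]))
```

In a GPU implementation this accumulation would typically be expressed per fragment over the texture-array slices rather than as a loop over full images.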