Zellmann, Stefan
Frosini, Patrizio and Giorgi, Daniela and Melzi, Simone and Rodolà, Emanuele
2021-10-25
2021
978-3-03868-165-6
2617-4855
https://doi.org/10.2312/stag.20211479
https://diglib.eg.org:443/handle/10.2312/stag20211479

We propose an image warping-based remote rendering technique for volumes that decouples the rendering and display phases. For that we build on prior work where we sample the volume on the server using ray casting and reconstruct z-values based on heuristics. Color and depth buffers are then sent to the client, which reuses this depth image as a stand-in for subsequent frames by warping it to reflect the current camera position and orientation until new data is received from the server. The extension we propose in this work represents the depth pixels as spheres and ray traces them on the client side. In contrast to the reference method, this representation adapts the footprint of the depth pixels to their distance from the camera origin, which is more effective at hiding warping artifacts, particularly when applied to volumetric data sets.

Human centered computing; Visualization techniques; Scientific visualization; Computing methodologies; Ray tracing; Graphics processors

Remote Volume Rendering with a Decoupled, Ray-Traced Display Phase
10.2312/stag.20211479
97-101
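
To illustrate the client-side display phase described in the abstract, the following is a minimal C++ sketch, not the paper's implementation: it converts one pixel of the received depth image into a world-space sphere whose radius matches the pixel footprint at that depth, which is the property that lets the sphere representation adapt to the distance from the camera origin. The camera parametrization and all names (Camera, Sphere, depthPixelToSphere) are assumptions made for illustration only.

#include <cmath>

struct Vec3 { float x, y, z; };

struct Sphere { Vec3 center; float radius; };

// Assumed pinhole camera of the server frame that produced the depth image.
struct Camera {
    Vec3 eye, right, up, forward;  // orthonormal view basis
    float fovY;                    // vertical field of view in radians
    int width, height;             // depth-image resolution
};

// Turn one depth pixel into a sphere primitive for the client-side ray tracer.
// depth is assumed to be the distance along the primary ray through the pixel.
Sphere depthPixelToSphere(const Camera& cam, int px, int py, float depth)
{
    // Pixel center in normalized device coordinates.
    float ndcX = (px + 0.5f) / cam.width  * 2.0f - 1.0f;
    float ndcY = (py + 0.5f) / cam.height * 2.0f - 1.0f;

    float tanHalfFov = std::tan(cam.fovY * 0.5f);
    float aspect = float(cam.width) / float(cam.height);

    // Primary ray direction through the pixel, in world space.
    Vec3 d {
        cam.forward.x + ndcX * aspect * tanHalfFov * cam.right.x + ndcY * tanHalfFov * cam.up.x,
        cam.forward.y + ndcX * aspect * tanHalfFov * cam.right.y + ndcY * tanHalfFov * cam.up.y,
        cam.forward.z + ndcX * aspect * tanHalfFov * cam.right.z + ndcY * tanHalfFov * cam.up.z
    };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);

    Sphere s;
    s.center = { cam.eye.x + d.x / len * depth,
                 cam.eye.y + d.y / len * depth,
                 cam.eye.z + d.z / len * depth };

    // Footprint of one pixel at this distance: the radius grows linearly with
    // depth, so distant samples cover more space and near samples stay small,
    // in contrast to fixed-size point splats.
    float pixelAngle = 2.0f * tanHalfFov / cam.height;
    s.radius = 0.5f * pixelAngle * depth;
    return s;
}

In this sketch, the client would build an acceleration structure over the spheres produced for all depth pixels and ray trace them from the current camera pose, reusing the associated colors until a fresh color/depth pair arrives from the server.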