Title: Real-time Novel-view Synthesis for Volume Rendering Using a Piecewise-analytic Representation
Authors: Gerrit Lochmann, Bernhard Reinert, Arend Buchacher, Tobias Ritschel
Editors: Matthias Hullin, Marc Stamminger, Tino Weinkauf
Date: 2016-10-10
ISBN: 978-3-03868-025-3
DOI: https://doi.org/10.2312/vmv.20161346
URL: https://diglib.eg.org:443/handle/10.2312/vmv20161346
Pages: 85-92

Abstract: Novel-view synthesis can be used to hide latency in a real-time remote-rendering setup, to increase frame rate, or to produce advanced visual effects such as depth-of-field or motion blur in volumes, or stereo and light-field imagery. Regrettably, existing real-time solutions are limited to opaque surfaces. Prior art has circumvented the challenge by making volumes opaque, i.e., projecting the volume onto representative surfaces for reprojection, which omits correct volumetric effects. This paper proposes a layered image representation that is re-composed for the novel view with a special reconstruction filter. We propose a view-dependent approximation to the volume that allows producing a typical novel view of 1024x1024 pixels in ca. 25 ms on a current GPU. At the heart of our approach is the idea of compressing the complex view-dependent emission-absorption function along original view rays into a layered piecewise-analytic emission-absorption representation that can be efficiently ray-cast from a novel view. It does not assume opaque surfaces or approximate color and opacity, can be re-evaluated very efficiently, results in an image identical to the reference from the original view, has correct volumetric shading for novel views, and works with a low, fixed number of layers per pixel that fits modern GPU architectures.
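
For context, a minimal sketch of the standard emission-absorption model that such a layered representation approximates, assuming piecewise-constant emission c_i and extinction \sigma_i per layer (the paper's exact per-layer analytic form is not given in this record). Radiance along a view ray is

    L = \int_0^{\infty} c(t)\,\sigma(t)\,\exp\!\Big(-\int_0^{t} \sigma(s)\,ds\Big)\,dt,

and splitting the ray into K segments [t_i, t_{i+1}] of length \Delta_i = t_{i+1} - t_i, each with constant c_i and \sigma_i, gives a closed-form front-to-back composite:

    L \approx \sum_{i=1}^{K} c_i\,\big(1 - e^{-\sigma_i \Delta_i}\big)\,\prod_{j=1}^{i-1} e^{-\sigma_j \Delta_j}.

Here 1 - e^{-\sigma_i \Delta_i} is the opacity of layer i and the product is the transmittance accumulated in front of it; a fixed K per pixel is what maps well onto GPU hardware, as the abstract notes.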