40-Issue 4
Browsing 40-Issue 4 by Author "Drettakis, George"
Item: Point-Based Neural Rendering with Per-View Optimization (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Kopanas, Georgios; Philip, Julien; Leimkühler, Thomas; Drettakis, George. Editors: Bousseau, Adrien; McGuire, Morgan
There has recently been great interest in neural rendering methods. Some approaches use 3D geometry reconstructed with Multi-View Stereo (MVS) but cannot recover from the errors of this process, while others directly learn a volumetric neural representation but suffer from expensive training and inference. We introduce a general approach that is initialized with MVS, but allows further optimization of scene properties in the space of input views, including depth and reprojected features, resulting in improved novel-view synthesis. A key element of our approach is our new differentiable point-based pipeline, based on bi-directional Elliptical Weighted Average splatting, a probabilistic depth test and effective camera selection. We use these elements together in our neural renderer, which outperforms all previous methods in both quality and speed in almost all scenes we tested. Our pipeline can be applied to multi-view harmonization and stylization in addition to novel-view synthesis.

Item: Video-Based Rendering of Dynamic Stationary Environments from Unsynchronized Inputs (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Thonat, Theo; Aksoy, Yagiz; Aittala, Miika; Paris, Sylvain; Durand, Fredo; Drettakis, George. Editors: Bousseau, Adrien; McGuire, Morgan
Image-Based Rendering allows users to easily capture a scene using a single camera and then navigate freely with realistic results. However, the resulting renderings are completely static, and dynamic effects such as fire, waterfalls or small waves cannot be reproduced. We tackle the challenging problem of enabling free-viewpoint navigation that includes such stationary dynamic effects while maintaining the simplicity of casual capture. Using a single camera, instead of the complex synchronized multi-camera setups of previous work, means that we have unsynchronized videos of the dynamic effect from multiple views, making it hard to blend them when synthesizing novel views. We present a solution that allows smooth free-viewpoint video-based rendering (VBR) of such scenes using a temporal Laplacian pyramid decomposition of the videos, enabling spatio-temporal blending. For effects such as fire and waterfalls, which are semi-transparent and occupy 3D space, we first estimate their spatial volume. This allows us to create per-video geometries and alpha-matte videos that we can blend using our frequency-dependent method. We also extend Laplacian blending to the temporal dimension to remove additional temporal seams. We show results on scenes containing fire, waterfalls or rippling waves at the seaside, bringing these scenes to life.
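
The first item's differentiable pipeline combines elliptical (EWA) splat footprints with a probabilistic depth test. The Python sketch below is a rough illustration of that combination only, not the authors' implementation: it splats points as 2D screen-space Gaussians and replaces a hard z-buffer with a soft depth weight. The function name, the dense per-pixel rasterization and the depth_sigma parameter are illustrative assumptions.

# Minimal sketch of EWA-style point splatting with a soft ("probabilistic")
# depth test. Illustrative only; the paper's bi-directional splatting,
# per-view optimization and camera selection are not reproduced here.
import numpy as np

def splat_points(points_2d, depths, colors, covs, H, W, depth_sigma=0.05):
    """points_2d: (N, 2) screen positions, depths: (N,), colors: (N, 3),
    covs: (N, 2, 2) screen-space covariances. Returns an (H, W, 3) image."""
    depths = np.asarray(depths, dtype=float)
    colors = np.asarray(colors, dtype=float)
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys], axis=-1).astype(float)           # (H, W, 2)

    # Elliptical Gaussian footprint of every splat (dense here for clarity;
    # a real renderer would only rasterize a small window per splat).
    footprints = []
    for p, cov in zip(points_2d, covs):
        d = pix - p
        maha = np.einsum('hwi,ij,hwj->hw', d, np.linalg.inv(cov), d)
        footprints.append(np.exp(-0.5 * maha))
    footprints = np.stack(footprints)                          # (N, H, W)

    # Pass 1: front-most depth per pixel among splats that cover it.
    covered = footprints > 1e-3
    z_front = np.where(covered, depths[:, None, None], np.inf).min(axis=0)

    # Pass 2: soft depth test -- splats near the front depth keep most of
    # their weight instead of being hard-rejected, so occlusion stays smooth.
    depth_w = np.exp(-((depths[:, None, None] - z_front) ** 2)
                     / (2.0 * depth_sigma ** 2))
    w = footprints * depth_w                                    # (N, H, W)

    image = np.einsum('nhw,nc->hwc', w, colors)
    return image / np.maximum(w.sum(axis=0)[..., None], 1e-8)

Because the soft weight varies smoothly with depth, each point's contribution stays differentiable, which is in the spirit of, though far simpler than, the probabilistic depth test the abstract describes.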
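
The second item blends unsynchronized videos through a temporal Laplacian pyramid. The sketch below, again only a hedged illustration and not the paper's code, decomposes a video volume of shape (T, H, W, C) into temporal frequency bands and blends two aligned volumes band by band; the pyramid depth, filter sigma and scalar blend weight alpha are assumptions, and the paper's reprojection, alpha mattes and frequency-dependent weighting are omitted.

# Minimal sketch of Laplacian-pyramid blending applied along the time axis of
# two video volumes of identical shape (T, H, W, C). Illustrative assumptions
# only; not the paper's implementation.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def downsample_t(v, sigma=1.0):
    # Blur along time only, then drop every other frame.
    return gaussian_filter1d(v, sigma, axis=0)[::2]

def upsample_t(v, length, sigma=1.0):
    # Repeat frames to the target length, then smooth along time.
    up = np.repeat(v, 2, axis=0)[:length]
    return gaussian_filter1d(up, sigma, axis=0)

def temporal_laplacian_pyramid(video, levels=3):
    pyramid, current = [], video
    for _ in range(levels):
        down = downsample_t(current)
        # Temporal detail band: what the coarser level cannot represent.
        pyramid.append(current - upsample_t(down, current.shape[0]))
        current = down
    pyramid.append(current)            # low-frequency temporal residual
    return pyramid

def collapse(pyramid):
    current = pyramid[-1]
    for band in reversed(pyramid[:-1]):
        current = band + upsample_t(current, band.shape[0])
    return current

def blend_videos(video_a, video_b, alpha=0.5, levels=3):
    """Blend two temporally aligned video volumes band by band."""
    pyr_a = temporal_laplacian_pyramid(video_a, levels)
    pyr_b = temporal_laplacian_pyramid(video_b, levels)
    blended = [alpha * a + (1.0 - alpha) * b for a, b in zip(pyr_a, pyr_b)]
    return collapse(blended)

Giving each temporal band its own blend is the core of the spatio-temporal blending idea the abstract describes: slow intensity changes and fast flicker can be mixed independently rather than with a single cross-fade.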