VMV2023
Browsing VMV2023 by Author "Eisemann, Martin"
Now showing 1 - 2 of 2
Item
Optimizing Temporal Stability in Underwater Video Tone Mapping
(The Eurographics Association, 2023) Franz, Matthias; Thang, B. Matthias; Sackhoff, Pascal; Scholz, Timon; Möller, Jannis; Grogorick, Steve; Eisemann, Martin; Guthe, Michael; Grosch, Thorsten
In this paper, we present an approach for the temporal stabilization of depth-based underwater image tone mapping methods, applied to monocular RGB video. Typically, the goal is to improve the colors of focused objects while leaving more distant regions nearly unchanged, preserving the underwater look-and-feel of the overall image. To this end, many methods rely on estimated depth to control the recolorization process, i.e., to enhance colors (reduce blue tint) only for objects close to the camera. However, while single-view depth estimation is usually consistent within a frame, it often suffers from inconsistencies across sequential frames, resulting in color fluctuations during tone mapping. We propose a simple yet effective inter-frame stabilization of the computed depth maps to achieve stable tone mapping results. An evaluation on eight test sequences demonstrates its effectiveness across a wide range of underwater scenarios.

Item
PlenopticPoints: Rasterizing Neural Feature Points for High-Quality Novel View Synthesis
(The Eurographics Association, 2023) Hahlbohm, Florian; Kappel, Moritz; Tauscher, Jan-Philipp; Eisemann, Martin; Magnor, Marcus; Guthe, Michael; Grosch, Thorsten
This paper presents a point-based neural rendering approach for complex real-world objects from a set of photographs. Our method is specifically geared towards representing fine detail and reflective surface characteristics at improved quality over current state-of-the-art methods. From the photographs, we create a 3D point model based on optimized neural feature points located on a regular grid. For rendering, we employ view-dependent spherical harmonics shading, differentiable rasterization, and a deep neural rendering network. By combining a point-based approach and novel regularizers, our method is able to accurately represent local detail such as fine geometry and high-frequency texture while convincingly interpolating unseen viewpoints during inference. Our method achieves about 7 frames per second at 800×800 pixel output resolution on commodity hardware, putting it within reach of real-time rendering applications.
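The inter-frame depth stabilization described in the first abstract could, in its simplest form, be realized as a temporal exponential moving average over per-frame depth maps; the paper's actual method is not given here, so the following is a hypothetical sketch with an assumed function name (`stabilize_depth`) and smoothing weight.

```python
import numpy as np

def stabilize_depth(depth_frames, alpha=0.8):
    """Temporally smooth a sequence of per-frame depth maps with an
    exponential moving average (EMA) to suppress frame-to-frame flicker.
    alpha: weight of the running estimate (higher = more stable, laggier)."""
    smoothed = []
    running = None
    for depth in depth_frames:
        depth = np.asarray(depth, dtype=np.float64)
        if running is None:
            running = depth  # first frame initializes the estimate
        else:
            running = alpha * running + (1.0 - alpha) * depth
        smoothed.append(running.copy())
    return smoothed

# Example: a constant-depth scene corrupted by per-frame estimation noise
# becomes progressively more stable after smoothing.
frames = [np.full((4, 4), 5.0)
          + np.random.default_rng(i).normal(0.0, 0.5, (4, 4))
          for i in range(10)]
stable = stabilize_depth(frames)
```

A real system would additionally have to handle camera and object motion (e.g. by warping the running estimate before blending), since a naive EMA blurs depth edges that genuinely move between frames.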