Search Results
D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Kappel, Moritz; Hahlbohm, Florian; Scholz, Timon; Castillo, Susana; Theobalt, Christian; Eisemann, Martin; Golyanik, Vladislav; Magnor, Marcus; Bousseau, Adrien; Day, Angela

Dynamic reconstruction and spatiotemporal novel-view synthesis of non-rigidly deforming scenes have recently gained increasing attention. While existing work achieves impressive quality and performance on multi-view or teleporting camera setups, most methods fail to efficiently and faithfully recover motion and appearance from casual monocular captures. This paper contributes to the field by introducing a new method for dynamic novel-view synthesis from monocular video, such as casual smartphone captures. Our approach represents the scene as a dynamic neural point cloud, an implicit time-conditioned point distribution that encodes local geometry and appearance in separate hash-encoded neural feature grids for static and dynamic regions. By sampling a discrete point cloud from our model, we can efficiently render high-quality novel views using a fast differentiable rasterizer and a neural rendering network. Similar to recent work, we leverage advances in neural scene analysis by incorporating data-driven priors such as monocular depth estimation and object segmentation to resolve motion and depth ambiguities arising from the monocular captures. In addition to guiding the optimization process, we show that these priors can be exploited to explicitly initialize our scene representation, drastically improving optimization speed and final image quality. As evidenced by our experimental evaluation, our dynamic point cloud model not only enables fast optimization and real-time frame rates for interactive applications, but also achieves competitive image quality on monocular benchmark sequences. Our code and data are available online at https://moritzkappel.github.io/projects/dnpc/.

Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Hahlbohm, Florian; Friederichs, Fabian; Weyrich, Tim; Franke, Linus; Kappel, Moritz; Castillo, Susana; Stamminger, Marc; Eisemann, Martin; Magnor, Marcus; Bousseau, Adrien; Day, Angela

3D Gaussian Splats (3DGS) have proven to be a versatile rendering primitive, both for inverse rendering and for real-time exploration of scenes. In these applications, coherence across camera frames and multiple views is crucial, be it for robust convergence of a scene reconstruction or for artifact-free fly-throughs. Recent work has started mitigating artifacts that break multi-view coherence, including popping artifacts due to inconsistent transparency sorting and perspective-correct outlines of (2D) splats. At the same time, real-time requirements have forced such implementations to accept compromises in how the transparency of large assemblies of 3D Gaussians is resolved, in turn breaking coherence in other ways. In our work, we aim at maximum coherence by rendering fully perspective-correct 3D Gaussians while using hybrid transparency, a high-quality per-pixel approximation of accurate blending, in order to retain real-time frame rates. Our fast and perspectively accurate approach for evaluating 3D Gaussians does not require matrix inversions, thereby ensuring numerical stability and eliminating the need for special handling of degenerate splats, and the hybrid transparency formulation for blending maintains quality similar to fully resolved per-pixel transparency at a fraction of the rendering cost. We further show that each of these two components can be independently integrated into Gaussian splatting systems. In combination, they achieve up to 2× higher frame rates, 2× faster optimization, and equal or better image quality with fewer rendering artifacts compared to traditional 3DGS on common benchmarks.
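As a rough illustration of the representation described in the D-NPC abstract above, the following PyTorch sketch keeps separate hash-encoded feature grids for static and dynamic content and decodes time-conditioned per-point features. It is a minimal sketch under simplifying assumptions (single-level hash grids, an invented decoder); none of the class or parameter names come from the paper.

```python
import torch
import torch.nn as nn

class HashGrid(nn.Module):
    """Single-level spatial hash grid of learnable feature vectors
    (a heavily simplified stand-in for a multiresolution hash encoding)."""
    def __init__(self, table_size=2**16, feat_dim=8, resolution=128):
        super().__init__()
        self.table = nn.Parameter(torch.randn(table_size, feat_dim) * 1e-2)
        self.resolution = resolution
        # Large primes for spatial hashing, as in Instant-NGP-style encodings.
        self.register_buffer("primes", torch.tensor([1, 2654435761, 805459861]))

    def forward(self, xyz):  # xyz in [0, 1]^3, shape (N, 3)
        idx = (xyz * self.resolution).long()                   # voxel indices
        h = (idx * self.primes).sum(-1) % self.table.shape[0]  # spatial hash
        return self.table[h]                                   # (N, feat_dim)

class DynamicNeuralPointCloud(nn.Module):
    """Separate static/dynamic feature grids decoded per point and time step."""
    def __init__(self, feat_dim=8):
        super().__init__()
        self.static_grid = HashGrid(feat_dim=feat_dim)
        self.dynamic_grid = HashGrid(feat_dim=feat_dim)
        # Illustrative decoder: (features, time) -> RGB + opacity for a rasterizer.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + 1, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, xyz, t, is_dynamic):
        # Route each point to the grid matching its segmentation label.
        feats = torch.where(is_dynamic[:, None],
                            self.dynamic_grid(xyz), self.static_grid(xyz))
        t_col = t.expand(xyz.shape[0], 1)  # time-condition every point
        return self.decoder(torch.cat([feats, t_col], dim=-1))

model = DynamicNeuralPointCloud()
points = torch.rand(1024, 3)                         # sampled point positions
dyn_mask = torch.rand(1024) < 0.3                    # from object segmentation
rgba = model(points, torch.tensor([0.5]), dyn_mask)  # (1024, 4)
```

In the paper's pipeline the decoded per-point attributes would feed a differentiable rasterizer and a neural rendering network; here the decoder simply emits RGB plus opacity.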
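The hybrid transparency blending named in the second abstract can be illustrated per pixel: blend the K front-most fragments exactly (the "core") and merge all remaining fragments order-independently (the "tail"). The NumPy sketch below shows this idea; K and the tail formula are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def hybrid_blend(colors, alphas, depths, k=4):
    """Composite one pixel's fragments: exact core + order-independent tail."""
    order = np.argsort(depths)  # front-to-back
    core, tail = order[:k], order[k:]

    # Exact front-to-back alpha blending for the K closest fragments.
    rgb, transmittance = np.zeros(3), 1.0
    for i in core:
        rgb += transmittance * alphas[i] * colors[i]
        transmittance *= 1.0 - alphas[i]

    # Tail: alpha-weighted average color and combined opacity; no sorting needed.
    if tail.size:
        w = alphas[tail]
        tail_rgb = (w[:, None] * colors[tail]).sum(0) / max(w.sum(), 1e-8)
        tail_alpha = 1.0 - np.prod(1.0 - w)
        rgb += transmittance * tail_alpha * tail_rgb
        transmittance *= 1.0 - tail_alpha

    return rgb, transmittance  # composite over the background via transmittance

# Example: eight fragments covering one pixel.
rng = np.random.default_rng(0)
rgb, T = hybrid_blend(rng.random((8, 3)), rng.random(8) * 0.5, rng.random(8))
```

Because the tail needs no sorting, its cost stays bounded regardless of depth complexity, which is how blending quality close to fully sorted per-pixel transparency can be retained at a fraction of the cost.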
Splatshop: Efficiently Editing Large Gaussian Splat Models (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Schütz, Markus; Peters, Christoph; Hahlbohm, Florian; Eisemann, Elmar; Magnor, Marcus; Wimmer, Michael; Knoll, Aaron; Peters, Christoph

We present Splatshop, a highly optimized toolbox for interactive editing (selection, deletion, painting, transformation, ...) of 3D Gaussian Splatting models. Utilizing a comprehensive collection of heuristic approaches, we carefully balance exact and fast rendering to enable precise editing without sacrificing real-time performance. Our experiments confirm that Splatshop achieves these goals for scenes with up to 100 million primitives. We also show how our proposed pipeline can be extended for use with head-mounted displays. As such, Splatshop is the first VR-capable editor for large-scale 3D Gaussian Splatting models and a step towards a "Photoshop for Gaussian Splatting."

SPaGS: Fast and Accurate 3D Gaussian Splatting for Spherical Panoramas (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Li, Junbo; Hahlbohm, Florian; Scholz, Timon; Eisemann, Martin; Tauscher, Jan-Philipp; Magnor, Marcus; Wang, Beibei; Wilkie, Alexander

In this paper, we propose SPaGS, a high-quality, real-time free-viewpoint rendering approach from 360-degree panoramic images. While existing methods building on Neural Radiance Fields or 3D Gaussian Splatting have difficulty achieving real-time frame rates and high-quality results at the same time, SPaGS combines the advantages of an explicit 3D Gaussian-based scene representation and ray casting-based rendering to attain fast and accurate results. Central to our new approach is the exact calculation of axis-aligned bounding boxes for spherical images, which significantly accelerates omnidirectional ray casting of 3D Gaussians. We also present a new dataset of ten real-world scenes recorded with a drone, comprising both calibrated 360-degree panoramic images and perspective images captured simultaneously, i.e., along the same flight trajectory. Our evaluation on this new dataset as well as on established benchmarks demonstrates that SPaGS outperforms state-of-the-art methods in terms of both rendering quality and speed.
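To make one of the Splatshop editing operations concrete, here is a hypothetical NumPy sketch of screen-space brush selection: splats whose projected centers fall inside a circular brush are added to a selection mask. The pinhole projection and every parameter name are assumptions for illustration, not Splatshop's actual API.

```python
import numpy as np

def brush_select(centers, view, focal, brush_xy, brush_radius, selection):
    """Add splats under a circular screen-space brush to the selection mask."""
    n = len(centers)
    # World -> camera space (view: 4x4 world-to-camera matrix).
    cam = (view @ np.c_[centers, np.ones(n)].T).T[:, :3]
    in_front = cam[:, 2] > 1e-6
    # Pinhole projection of splat centers to pixel coordinates.
    xy = focal * cam[:, :2] / np.maximum(cam[:, 2:3], 1e-6)
    dist = np.linalg.norm(xy - brush_xy, axis=1)
    return selection | (in_front & (dist < brush_radius))

# Example: one brush stroke over a random model.
centers = np.random.rand(100_000, 3) * 2 - 1
selected = np.zeros(len(centers), dtype=bool)
view = np.eye(4); view[2, 3] = 3.0  # camera 3 units behind the model
selected = brush_select(centers, view, focal=800.0,
                        brush_xy=np.array([0.0, 0.0]),
                        brush_radius=50.0, selection=selected)
```

A production editor would test against each splat's projected extent rather than only its center, but center tests already convey how selection state can drive deletion, painting, or transformation of the chosen subset.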
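The "exact calculation of axis-aligned bounding boxes for spherical images" in the SPaGS abstract can be approached, for intuition, via the standard bounding box of a spherical cap: bound each Gaussian by a sphere (e.g., at 3 sigma), compute the angular radius of the cap it subtends from the camera, and derive latitude/longitude bounds in equirectangular coordinates. The sketch below implements that textbook construction; it is not necessarily the paper's derivation.

```python
import numpy as np

def spherical_cap_aabb(center, radius):
    """(lon_min, lon_max, lat_min, lat_max) in radians, camera at the origin."""
    d = np.linalg.norm(center)
    if d <= radius:  # camera inside the bounding sphere: cover the full panorama
        return -np.pi, np.pi, -np.pi / 2, np.pi / 2
    lon = np.arctan2(center[1], center[0])
    lat = np.arcsin(center[2] / d)
    ang = np.arcsin(radius / d)  # angular radius of the subtended cap
    lat_min, lat_max = lat - ang, lat + ang
    if lat_max >= np.pi / 2 or lat_min <= -np.pi / 2:
        # The cap contains a pole, so it spans all longitudes.
        return -np.pi, np.pi, max(lat_min, -np.pi / 2), min(lat_max, np.pi / 2)
    # Longitude half-width of a cap that excludes both poles.
    dlon = np.arcsin(np.sin(ang) / np.cos(lat))
    return lon - dlon, lon + dlon, lat_min, lat_max  # lon range may wrap at +-pi

# Example: a Gaussian bounded at 3 sigma.
print(spherical_cap_aabb(center=np.array([2.0, 1.0, 0.5]), radius=3 * 0.1))
```

Only panorama tiles overlapping this latitude/longitude box need their rays tested against the Gaussian, which is what makes such bounds useful for accelerating omnidirectional ray casting.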