Search Results

Now showing 1 - 10 of 10
  • Item
    Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Hahlbohm, Florian; Friederichs, Fabian; Weyrich, Tim; Franke, Linus; Kappel, Moritz; Castillo, Susana; Stamminger, Marc; Eisemann, Martin; Magnor, Marcus; Bousseau, Adrien; Day, Angela
    3D Gaussian Splats (3DGS) have proven a versatile rendering primitive, both for inverse rendering and for real-time exploration of scenes. In these applications, coherence across camera frames and multiple views is crucial, be it for robust convergence of a scene reconstruction or for artifact-free fly-throughs. Recent work started mitigating artifacts that break multi-view coherence, including popping artifacts due to inconsistent transparency sorting and perspective-correct outlines of (2D) splats. At the same time, real-time requirements forced such implementations to accept compromises in how the transparency of large assemblies of 3D Gaussians is resolved, in turn breaking coherence in other ways. In our work, we aim to achieve maximum coherence by rendering fully perspective-correct 3D Gaussians while using a high-quality per-pixel approximation of accurate blending, hybrid transparency, in order to retain real-time frame rates. Our fast and perspectively accurate approach for evaluating 3D Gaussians does not require matrix inversions, thereby ensuring numerical stability and eliminating the need for special handling of degenerate splats, and the hybrid transparency formulation for blending maintains quality similar to fully resolved per-pixel transparency at a fraction of the rendering cost. We further show that each of these two components can be independently integrated into Gaussian splatting systems. In combination, they achieve up to 2× higher frame rates, 2× faster optimization, and equal or better image quality with fewer rendering artifacts compared to traditional 3DGS on common benchmarks.
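    The hybrid transparency idea above can be pictured for a single pixel: the k nearest fragments form a sorted "core" that is composited exactly, while the remaining "tail" is merged order-independently. A minimal sketch with scalar colors (illustrative of the general hybrid transparency technique, not the paper's GPU implementation):

    ```python
    def blend_hybrid(fragments, k=4):
        """fragments: list of (depth, color, alpha) with scalar color.
        Returns (blended color, remaining transmittance)."""
        frags = sorted(fragments, key=lambda f: f[0])
        core, tail = frags[:k], frags[k:]

        color = 0.0
        transmittance = 1.0
        # Core: exact front-to-back 'over' compositing of the k nearest fragments.
        for _, c, a in core:
            color += transmittance * a * c
            transmittance *= 1.0 - a

        # Tail: order-independent merge -- alpha-weighted average color,
        # attenuated by the product of the tail transmittances.
        w = sum(a for _, _, a in tail)
        if w > 0.0:
            avg = sum(a * c for _, c, a in tail) / w
            tail_trans = 1.0
            for _, _, a in tail:
                tail_trans *= 1.0 - a
            color += transmittance * (1.0 - tail_trans) * avg
            transmittance *= tail_trans

        return color, transmittance
    ```

    With a single tail fragment the merge is still exact; quality degrades gracefully as more fragments fall into the tail.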
  • Item
    Many-Light Rendering Using ReSTIR-Sampled Shadow Maps
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Zhang, Song; Lin, Daqi; Wyman, Chris; Yuksel, Cem; Bousseau, Adrien; Day, Angela
    We present a practical method targeting dynamic shadow maps for many light sources in real-time rendering. We compute full-resolution shadow maps for a subset of lights, which we select with spatiotemporal reservoir resampling (ReSTIR). Our selection strategy automatically regenerates shadow maps for lights with the strongest contributions to pixels in the current camera view. The remaining lights are handled using imperfect shadow maps, which provide a low-resolution shadow approximation. We significantly reduce computation and storage compared to using all full-resolution shadow maps and substantially improve shadow quality compared to handling all lights with imperfect shadow maps.
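    The selection step rests on weighted reservoir sampling, the core of ReSTIR-style resampled importance sampling: one candidate is kept per stream with probability proportional to its weight. A minimal sketch (illustrative; `select_light` and `contribution` are assumed names, not the paper's API):

    ```python
    import random

    class Reservoir:
        """Single-sample weighted reservoir: after streaming all candidates,
        'sample' holds one candidate with probability weight / total weight."""
        def __init__(self):
            self.sample = None
            self.w_sum = 0.0
            self.count = 0

        def update(self, candidate, weight, rng=random):
            self.w_sum += weight
            self.count += 1
            if weight > 0.0 and rng.random() < weight / self.w_sum:
                self.sample = candidate

    def select_light(lights, contribution, rng=random):
        """Pick one light index with probability proportional to its
        (estimated) contribution -- the light that then gets a
        full-resolution shadow map."""
        r = Reservoir()
        for i, light in enumerate(lights):
            r.update(i, contribution(light), rng)
        return r.sample
    ```

    In an actual ReSTIR pipeline these reservoirs would additionally be merged spatially and temporally; the streaming update above is the building block.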
  • Item
    Fast Sphere Tracing of Procedural Volumetric Noise for very Large and Detailed Scenes
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Moinet, Mathéo; Neyret, Fabrice; Bousseau, Adrien; Day, Angela
    Real-time walk-throughs of very large and detailed scenes are a challenge for content design, data management, and rendering alike, and require level of detail (LOD) to handle the scale range. In the case of partly stochastic content (clouds, cosmic dust, fire, terrains, etc.), proceduralism allows arbitrarily large and detailed scenes with little or no storage and offers embedded LOD, but rendering gets even costlier. In this paper, we propose to boost the performance of Fractional Brownian Motion (FBM)-based noise rendering (e.g., 3D Perlin noise, hypertextures) in two ways: improving the stepping efficiency of Sphere Tracing of general Signed Distance Functions (SDFs) by considering the first and second derivatives, and treating cascaded sums such as FBM as nested bounding volumes. We illustrate this on various scenes made of either opaque material, constant semi-transparent material, or non-constant (i.e., fully volumetric inside) material, including animated content thanks to on-the-fly proceduralism. We obtain real-time performance with speedups of up to 12-fold on opaque or constant semi-transparent scenes compared to classical Sphere Tracing, and up to 2-fold (through empty-space skipping optimization) on non-constant density volumetric scenes.
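    For reference, classical sphere tracing, the baseline this paper accelerates, steps along the ray by the SDF value, which is always a safe distance to the nearest surface. A minimal sketch:

    ```python
    import math

    def sphere_trace(sdf, origin, direction, t_max=100.0, eps=1e-4, max_steps=256):
        """March a ray through a signed distance field. The SDF value at the
        current point bounds the distance to the closest surface, so we can
        advance by exactly that amount. Returns the hit distance, or None."""
        t = 0.0
        for _ in range(max_steps):
            p = tuple(o + t * d for o, d in zip(origin, direction))
            dist = sdf(p)
            if dist < eps:
                return t            # close enough: report a hit
            t += dist               # safe step: cannot overshoot a surface
            if t > t_max:
                return None         # left the scene
        return None

    # Example SDF: unit sphere centered at the origin.
    def sphere_sdf(p):
        return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2) - 1.0
    ```

    Near grazing rays the steps become tiny, which is exactly the inefficiency the derivative-aware stepping in the paper targets.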
  • Item
    FastAtlas: Real-Time Compact Atlases for Texture Space Shading
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Vining, Nicholas; Majercik, Zander; Gu, Floria; Takikawa, Towaki; Trusty, Ty; Lalonde, Paul; McGuire, Morgan; Sheffer, Alla; Bousseau, Adrien; Day, Angela
    Texture-space shading (TSS) methods decouple shading and rasterization, allowing shading to be performed at a different framerate and spatial resolution than rasterization. TSS has many potential applications, including streaming shading across networks, and reducing rendering cost via shading reuse across consecutive frames and/or shading at reduced resolutions relative to display resolution. Real-time TSS requires texture atlases small enough to be easily stored in GPU memory. Using static atlases leads to significant space wastage, motivating real-time per-frame atlasing strategies that pack only the content visible in each frame. We propose FastAtlas, a novel atlasing method that runs entirely on the GPU and is fast enough to be performed at interactive rates per frame. Our method combines new per-frame chart computation and parametrization strategies with an efficient general chart-packing algorithm. Our chartification strategy removes visible seams in output renders, and our parameterization ensures a constant texel-to-pixel ratio, avoiding undesirable undersampling artifacts. Our packing method is more general, and produces more tightly packed atlases, than previous work. Jointly, these innovations enable us to produce shading outputs of significantly higher visual quality than those produced using alternative atlasing strategies. We validate FastAtlas by shading and rendering challenging scenes using different atlasing settings, reflecting the needs of different TSS applications (temporal reuse, streaming, reduced or elevated shading rates). We extensively compare FastAtlas to prior alternatives and demonstrate that it achieves better shading quality and reduces texture stretch compared to prior approaches using the same settings.
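    The packing step can be pictured with a simple greedy shelf packer over chart bounding boxes. This is a generic stand-in only; the paper's packer is more general and packs more tightly than a shelf scheme:

    ```python
    def shelf_pack(rects, atlas_width):
        """Greedy shelf packing: place rectangles left-to-right on the current
        shelf, opening a new shelf when one does not fit. Assumes every
        rectangle is no wider than the atlas. rects: list of (w, h).
        Returns (placements, total atlas height)."""
        order = sorted(enumerate(rects), key=lambda r: -r[1][1])  # tallest first
        placements = [None] * len(rects)
        x = y = shelf_h = 0
        for idx, (w, h) in order:
            if x + w > atlas_width:   # current shelf full: open a new one
                y += shelf_h
                x = shelf_h = 0
            placements[idx] = (x, y)
            x += w
            shelf_h = max(shelf_h, h)
        return placements, y + shelf_h
    ```

    Sorting by height keeps each shelf tight; the wasted space above short rectangles on a shelf is what more general packers (like the paper's) reduce.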
  • Item
    Real-time Neural Rendering of LiDAR Point Clouds
    (The Eurographics Association, 2025) VANHERCK, Joni; Zoomers, Brent; Mertens, Tom; Jorissen, Lode; Michiels, Nick; Ceylan, Duygu; Li, Tzu-Mao
    Static LiDAR scanners produce accurate, dense, colored point clouds, but these often contain obtrusive artifacts that make them ill-suited for direct display. We propose an efficient method to render more perceptually realistic images of such scans without any expensive preprocessing or training of a scene-specific model. A naive projection of the point cloud to the output view using 1×1 pixels is fast and retains the available detail, but also results in unintelligible renderings as background points leak between the foreground pixels. The key insight is that these projections can be transformed into a more realistic result using a deep convolutional model in the form of a U-Net, combined with a depth-based heuristic that prefilters the data. The U-Net also handles LiDAR-specific problems such as missing parts due to occlusion, color inconsistencies, and varying point densities. We also describe a method to generate synthetic training data to deal with imperfectly aligned ground-truth images. Our method achieves real-time rendering rates using an off-the-shelf GPU and outperforms the state of the art in both speed and quality.
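    The naive projection and a depth-based prefilter of the kind described can be sketched as follows (illustrative; the function names, neighborhood radius, and depth slack are assumptions, not the paper's pipeline):

    ```python
    def splat_points(points, width, height):
        """Naive 1x1-pixel z-buffered projection of colored points already
        mapped to screen space. points: iterable of (x, y, depth, color)."""
        INF = float("inf")
        depth = [[INF] * width for _ in range(height)]
        color = [[None] * width for _ in range(height)]
        for x, y, z, c in points:
            if 0 <= x < width and 0 <= y < height and z < depth[y][x]:
                depth[y][x] = z
                color[y][x] = c
        return depth, color

    def prefilter(depth, color, radius=1, slack=0.1):
        """Depth-based heuristic: drop a pixel that lies noticeably behind
        the nearest depth in its neighborhood, i.e. a background point that
        leaked between foreground samples."""
        h, w = len(depth), len(depth[0])
        out = [row[:] for row in color]
        for y in range(h):
            for x in range(w):
                if color[y][x] is None:
                    continue
                near = min(
                    depth[y2][x2]
                    for y2 in range(max(0, y - radius), min(h, y + radius + 1))
                    for x2 in range(max(0, x - radius), min(w, x + radius + 1))
                )
                if depth[y][x] > near + slack:
                    out[y][x] = None  # likely leaked background
        return out
    ```

    The filtered image still has holes; in the paper, filling those (and the remaining LiDAR artifacts) is the job of the U-Net.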
  • Item
    Importance Sampling of BCSDF Derivatives
    (The Eurographics Association, 2025) Wang, Lei; Iwasaki, Kei; Ceylan, Duygu; Li, Tzu-Mao
    Differentiable rendering requires the development of importance sampling for derivative functions with respect to scene parameters. While importance sampling for Bidirectional Reflectance Distribution Function (BRDF) derivatives has been proposed in recent years, no methods have been introduced for the derivatives of the Bidirectional Curve Scattering Distribution Function (BCSDF). To bridge this gap, we propose an importance sampling method for the derivatives of the BCSDF using positivization [BXB∗24]. Our BCSDF derivative importance sampling method achieves up to a 94% reduction in RMSE for equal-time rendering.
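    Positivization splits a signed integrand g into g⁺ − g⁻ and importance-samples each part from a density proportional to it. A 1D stand-in with g(x) = cos(x) on [0, π] (not the paper's BCSDF estimator) shows why this removes the variance caused by the sign:

    ```python
    import math, random

    def positivized_estimate(n=16, rng=random):
        """Estimate the signed integral of g(x) = cos(x) over [0, pi] (true
        value 0) via positivization: g+ lives on [0, pi/2], g- on [pi/2, pi],
        and each part is sampled from a pdf exactly proportional to it via
        inverse-CDF sampling, so each sample weight is constant."""
        pos = neg = 0.0
        for _ in range(n):
            # g+: pdf(x) = cos(x) on [0, pi/2], CDF = sin(x), inverse = asin.
            x = math.asin(rng.random())
            pos += math.cos(x) / math.cos(x)        # constant weight: no variance
            # g-: pdf(x) = -cos(x) on [pi/2, pi], CDF = 1 - sin(x).
            u = rng.random()
            x = math.pi - math.asin(1.0 - u)
            neg += (-math.cos(x)) / (-math.cos(x))  # constant weight: no variance
        return pos / n - neg / n                    # estimates (∫g+) - (∫g-)
    ```

    A naive estimator sampling from |g| over the whole domain would mix signs and retain variance; sampling each signed part separately makes both sub-estimators exact here.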
  • Item
    Sampling of Anisotropic Spatial Gaussians for Path Guiding
    (The Eurographics Association, 2025) Lelyakin, Sergey; Schüßler, Vincent; Dachsbacher, Carsten; Günther, Tobias; Montazeri, Zahra
    Directional models in path guiding struggle with representing parallax effects or anisotropic features. Our model instead describes the spatial distribution of a target vertex using a 3D Gaussian mixture model. While this dispenses with the need for reprojection and makes it easy to represent anisotropic features, its directional probability density is not readily available, since it involves a marginal integral. In this work, we derive an expression for the PDF of our model in solid-angle measure that is practical to evaluate. We demonstrate how our model can improve guiding accuracy in various scenes.
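    The marginal integral in question can be written in solid-angle measure as p(ω) = ∫₀^∞ p(x + tω) t² dt, with t² the Jacobian of the spherical parameterization around x. A brute-force numerical sketch for an isotropic mixture (the paper derives a practical closed-form expression; this quadrature is only illustrative):

    ```python
    import math

    def gauss3(p, mean, sigma):
        """Isotropic 3D Gaussian density. The sketch uses isotropic lobes for
        brevity; the paper's model is a general anisotropic mixture."""
        d2 = sum((a - b) ** 2 for a, b in zip(p, mean))
        norm = (2.0 * math.pi * sigma * sigma) ** 1.5
        return math.exp(-0.5 * d2 / (sigma * sigma)) / norm

    def directional_pdf(x, omega, mixture, t_max=20.0, steps=2000):
        """Solid-angle pdf of direction omega from point x, obtained by
        marginalizing the spatial mixture along the ray:
        p(omega) = integral of p(x + t*omega) * t^2 dt (midpoint rule).
        mixture: list of (weight, mean, sigma) with weights summing to 1."""
        dt = t_max / steps
        total = 0.0
        for i in range(steps):
            t = (i + 0.5) * dt
            p = tuple(a + t * o for a, o in zip(x, omega))
            total += sum(w * gauss3(p, m, s) for w, m, s in mixture) * t * t * dt
        return total
    ```

    The density concentrates around directions pointing at the mixture's mass, which is what makes such a model useful as a guiding distribution.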
  • Item
    NOVA-3DGS: No-reference Objective VAlidation for 3D Gaussian Splatting
    (The Eurographics Association, 2025) Piras, Valentina; Bonatti, Amedeo Franco; Maria, Carmelo De; Cignoni, Paolo; Banterle, Francesco; Günther, Tobias; Montazeri, Zahra
    In recent years, radiance field methods, and in particular 3D Gaussian Splatting (3DGS), have distinguished themselves in the field of image-based rendering and scene reconstruction techniques, gaining significant success in academia and being cited in numerous research papers. Like other methods, 3DGS requires a large and diverse dataset of images for network training as a fundamental step to ensure effectiveness and high-quality results. Consequently, the acquisition phase is highly time-consuming, especially considering that a portion of the acquired dataset is not actually used for training but is reserved for testing. This is necessary because all commonly used metrics for evaluating the quality of 3D reconstructions, such as PSNR and SSIM, are reference-based metrics; i.e., they require a ground truth. In this work, we present NOVA, a study focused on no-reference evaluation of 3DGS renders, based on key metrics in this field: PSNR and SSIM.
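    As a reminder of what "reference-based" means here: PSNR is computed directly against a ground-truth image, which is exactly the dependency that no-reference evaluation tries to remove. A minimal sketch over flattened pixel values:

    ```python
    import math

    def psnr(reference, image, max_val=1.0):
        """Peak signal-to-noise ratio in dB. Note the 'reference' argument:
        the metric cannot be evaluated without a ground-truth image."""
        mse = sum((r - i) ** 2 for r, i in zip(reference, image)) / len(reference)
        if mse == 0.0:
            return float("inf")   # identical images
        return 10.0 * math.log10(max_val * max_val / mse)
    ```

    SSIM has the same dependency, comparing local statistics of the render against the ground truth.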
  • Item
    Real-Time Rendering Framework for Holography
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Fricke, Sascha; Castillo, Susana; Eisemann, Martin; Magnor, Marcus; Bousseau, Adrien; Day, Angela
    With the advent of holographic near-eye displays, the need for rendering algorithms that output holograms instead of color images emerged. These holograms usually encode phase maps that alter the phase of coherent light sources such that images result from diffraction effects. While common approaches rely on translating the output of traditional rendering systems to holograms in a post-processing step, we instead developed a rendering system that can directly output a phase map to a Spatial Light Modulator (SLM). Our hardware-ray-traced sparse point distribution and depth mapping enable rapid hologram generation, allowing for high-quality time-multiplexed holography for real-time content. Additionally, our system is compatible with foveated rendering, which enables further performance optimizations.
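    Phase-map generation from a point distribution can be pictured as superposing spherical waves from the scene points at each SLM pixel and keeping only the phase of the resulting field. A generic point-source hologram sketch (illustrative of the principle, not the paper's hardware-ray-traced pipeline):

    ```python
    import cmath, math

    def phase_map(points, slm_pixels, wavelength=5.2e-7):
        """Sum the spherical wave amp * exp(i*k*r) / r from every scene point
        at every SLM pixel and keep the phase of the complex field.
        points: list of (position, amplitude); positions and pixels are
        3-tuples in meters."""
        k = 2.0 * math.pi / wavelength
        phases = []
        for px in slm_pixels:
            field = 0j
            for pt, amp in points:
                r = math.dist(px, pt)
                field += amp * cmath.exp(1j * k * r) / r
            phases.append(cmath.phase(field))
        return phases
    ```

    The brute-force double loop is O(points × pixels); sparse point distributions and time multiplexing are what make real-time rates reachable in practice.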
  • Item
    Learning Fast 3D Gaussian Splatting Rendering using Continuous Level of Detail
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Milef, Nicholas; Seyb, Dario; Keeler, Todd; Nguyen-Phuoc, Thu; Bozic, Aljaz; Kondguli, Sushant; Marshall, Carl; Bousseau, Adrien; Day, Angela
    3D Gaussian splatting (3DGS) has shown potential for rendering photorealistic 3D scenes in real time. Unfortunately, rendering these scenes on less powerful hardware is still a challenge, especially with high-resolution displays. We introduce a continuous level-of-detail (CLOD) algorithm and demonstrate how our method can improve performance while preserving as much quality as possible. Our approach learns to order splats by importance and optimizes them such that a representative and realistic scene can be rendered for an arbitrary splat count. Our method does not require any additional memory or rendering overhead and works with existing 3DGS renderers. We also demonstrate the flexibility of our CLOD method by extending it with distance-based LOD selection, foveated rendering, and budget-based rendering.
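    The budget-based rendering described above can be pictured simply: once splats are ordered by learned importance, any prefix of the list is a usable approximation of the scene. A minimal sketch (function names and the distance-to-budget mapping are assumptions, not the paper's method):

    ```python
    def clod_subset(splats_by_importance, budget_fraction):
        """Continuous LOD by splat count: draw only the most important
        fraction of the pre-ordered splat list."""
        n = max(1, round(budget_fraction * len(splats_by_importance)))
        return splats_by_importance[:n]

    def distance_based_budget(distance, near=2.0, far=50.0, min_frac=0.1):
        """Map camera distance to a splat budget: full detail up close,
        tapering linearly to min_frac at 'far' (parameters are assumptions)."""
        t = min(max((distance - near) / (far - near), 0.0), 1.0)
        return 1.0 - t * (1.0 - min_frac)
    ```

    Because the budget varies continuously, the LOD transition is free of discrete popping, and the same mechanism serves foveated or per-frame budgeted rendering by choosing the fraction differently.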