Search Results

Now showing 1–10 of 14
  • Item
    Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Hahlbohm, Florian; Friederichs, Fabian; Weyrich, Tim; Franke, Linus; Kappel, Moritz; Castillo, Susana; Stamminger, Marc; Eisemann, Martin; Magnor, Marcus; Bousseau, Adrien; Day, Angela
    3D Gaussian Splats (3DGS) have proven a versatile rendering primitive, both for inverse rendering and for real-time exploration of scenes. In these applications, coherence across camera frames and multiple views is crucial, be it for robust convergence of a scene reconstruction or for artifact-free fly-throughs. Recent work has started mitigating artifacts that break multi-view coherence, including popping artifacts due to inconsistent transparency sorting and perspective-correct outlines of (2D) splats. At the same time, real-time requirements have forced such implementations to accept compromises in how the transparency of large assemblies of 3D Gaussians is resolved, in turn breaking coherence in other ways. In our work, we aim to achieve maximum coherence by rendering fully perspective-correct 3D Gaussians while using hybrid transparency, a high-quality per-pixel approximation of accurate blending, to retain real-time frame rates. Our fast and perspective-accurate approach for evaluating 3D Gaussians does not require matrix inversions, thereby ensuring numerical stability and eliminating the need for special handling of degenerate splats, and the hybrid transparency formulation for blending maintains quality similar to fully resolved per-pixel transparency at a fraction of the rendering cost. We further show that each of these two components can be independently integrated into Gaussian splatting systems. In combination, they achieve up to 2× higher frame rates, 2× faster optimization, and equal or better image quality with fewer rendering artifacts compared to traditional 3DGS on common benchmarks.
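    Hybrid transparency, as the term is generally used, splits each pixel's fragment list into a small exactly blended "core" and an order-independently merged "tail". The sketch below illustrates that general idea only; it is not the paper's implementation, and the split parameter `k` and the weighted-average tail merge are illustrative assumptions.

```python
# Hybrid transparency sketch (illustrative, not the paper's renderer):
# the k nearest fragments per pixel are alpha-blended in sorted order
# ("core"); the remaining fragments are merged order-independently ("tail").

def blend_core(fragments):
    """Front-to-back alpha blending of depth-sorted (depth, rgb, alpha) fragments."""
    color, transmittance = [0.0, 0.0, 0.0], 1.0
    for _, rgb, a in fragments:
        for c in range(3):
            color[c] += transmittance * a * rgb[c]
        transmittance *= 1.0 - a
    return color, transmittance

def blend_tail(fragments):
    """Order-independent tail: alpha-weighted average color, accumulated opacity."""
    if not fragments:
        return [0.0, 0.0, 0.0], 1.0
    wsum = sum(a for _, _, a in fragments)
    avg = [sum(a * rgb[c] for _, rgb, a in fragments) / wsum for c in range(3)]
    transmittance = 1.0
    for _, _, a in fragments:
        transmittance *= 1.0 - a
    return avg, transmittance

def hybrid_blend(fragments, k=4):
    fragments = sorted(fragments, key=lambda f: f[0])  # sort by depth
    core, tail = fragments[:k], fragments[k:]
    core_color, core_t = blend_core(core)
    tail_color, tail_t = blend_tail(tail)
    # The tail contributes behind the core, attenuated by the core's transmittance.
    return [core_color[c] + core_t * (1.0 - tail_t) * tail_color[c] for c in range(3)]
```

    When core and tail fragments share a color, the hybrid result matches exact sorted blending; quality degrades gracefully as the tail grows more heterogeneous.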
  • Item
    Many-Light Rendering Using ReSTIR-Sampled Shadow Maps
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Zhang, Song; Lin, Daqi; Wyman, Chris; Yuksel, Cem; Bousseau, Adrien; Day, Angela
    We present a practical method targeting dynamic shadow maps for many light sources in real-time rendering. We compute full-resolution shadow maps for a subset of lights, which we select with spatiotemporal reservoir resampling (ReSTIR). Our selection strategy automatically regenerates shadow maps for lights with the strongest contributions to pixels in the current camera view. The remaining lights are handled using imperfect shadow maps, which provide low-resolution shadow approximations. We significantly reduce computation and storage compared to using all full-resolution shadow maps and substantially improve shadow quality compared to handling all lights with imperfect shadow maps.
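    The core primitive behind ReSTIR-style selection is weighted reservoir sampling: a stream of candidates is reduced to one survivor chosen with probability proportional to its weight. The minimal sketch below shows only that primitive, not the paper's full spatiotemporal pipeline; the per-light weights are assumed to come from some contribution estimate.

```python
import random

# Weighted reservoir sampling, the building block of ReSTIR-style light
# selection (a sketch; spatial/temporal reservoir reuse is omitted).

class Reservoir:
    def __init__(self):
        self.sample = None  # currently selected candidate
        self.w_sum = 0.0    # running sum of resampling weights
        self.m = 0          # number of candidates seen

    def update(self, candidate, weight, rng=random):
        self.w_sum += weight
        self.m += 1
        if self.w_sum > 0 and rng.random() < weight / self.w_sum:
            self.sample = candidate

def select_light(light_weights, rng=random):
    """Pick one light index with probability proportional to its weight."""
    r = Reservoir()
    for i, w in enumerate(light_weights):
        r.update(i, w, rng)
    return r.sample
```

    A light selected this way would then receive a full-resolution shadow map for the frame, while unselected lights fall back to the imperfect-shadow-map path.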
  • Item
    Fast Sphere Tracing of Procedural Volumetric Noise for very Large and Detailed Scenes
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Moinet, Mathéo; Neyret, Fabrice; Bousseau, Adrien; Day, Angela
    Real-time walk-throughs of very large and detailed scenes are a challenge for content design, data management, and rendering, and require LOD to handle the scale range. In the case of partly stochastic content (clouds, cosmic dust, fire, terrains, etc.), proceduralism allows arbitrarily large and detailed scenes with little or no storage and offers embedded LOD, but the rendering gets even costlier. In this paper, we propose to boost the performance of Fractional Brownian Motion (FBM)-based noise rendering (e.g., 3D Perlin noise, hypertextures) in two ways: improving the stepping efficiency of Sphere Tracing of general Signed Distance Functions (SDF) by considering the first and second derivatives, and treating cascaded sums such as FBM as nested bounding volumes. We illustrate this on various scenes made of either opaque material, constant semi-transparent material, or non-constant (i.e., fully volumetric inside) material, including animated content, thanks to on-the-fly proceduralism. We obtain real-time performance with speedups of up to 12× on opaque or constant semi-transparent scenes compared to classical Sphere Tracing, and up to 2× (through empty-space skipping optimization) on non-constant density volumetric scenes.
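    For reference, this is the classical Sphere Tracing baseline the paper accelerates: at each step the SDF value itself is a safe step length, because no surface can lie closer. The derivative-aware stepping and nested FBM bounds of the paper are not reproduced here; the sphere SDF is a stand-in test function.

```python
import math

# Classical sphere tracing of an SDF (the baseline; the paper's
# derivative-aware stepping is not shown): march a ray, stepping by the
# distance bound the SDF returns at the current point.

def sphere_trace(sdf, origin, direction, t_max=100.0, eps=1e-4, max_steps=256):
    """Return the hit distance along the ray, or None if nothing is hit."""
    t = 0.0
    for _ in range(max_steps):
        p = [origin[i] + t * direction[i] for i in range(3)]
        d = sdf(p)
        if d < eps:
            return t
        t += d  # safe step: the SDF guarantees no surface is closer than d
        if t > t_max:
            return None
    return None

def sphere_sdf(p, radius=1.0):
    """Signed distance to a unit sphere at the origin (stand-in scene)."""
    return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2) - radius
```

    A ray starting at (0, 0, −3) marching along +z reaches the unit sphere at t ≈ 2; for FBM-like SDFs, which are not exact distance bounds, the step must additionally be scaled by a Lipschitz factor.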
  • Item
    FastAtlas: Real-Time Compact Atlases for Texture Space Shading
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Vining, Nicholas; Majercik, Zander; Gu, Floria; Takikawa, Towaki; Trusty, Ty; Lalonde, Paul; McGuire, Morgan; Sheffer, Alla; Bousseau, Adrien; Day, Angela
    Texture-space shading (TSS) methods decouple shading and rasterization, allowing shading to be performed at a different framerate and spatial resolution than rasterization. TSS has many potential applications, including streaming shading across networks, and reducing rendering cost via shading reuse across consecutive frames and/or shading at reduced resolutions relative to display resolution. Real-time TSS requires texture atlases small enough to be easily stored in GPU memory. Using static atlases leads to significant space wastage, motivating real-time per-frame atlasing strategies that pack only the content visible in each frame. We propose FastAtlas, a novel atlasing method that runs entirely on the GPU and is fast enough to be performed at interactive rates per frame. Our method combines new per-frame chart computation and parameterization strategies with an efficient general chart packing algorithm. Our chartification strategy removes visible seams in output renders, and our parameterization ensures a constant texel-to-pixel ratio, avoiding undesirable undersampling artifacts. Our packing method is more general, and produces more tightly packed atlases, than previous work. Jointly, these innovations enable us to produce shading outputs of significantly higher visual quality than those produced using alternative atlasing strategies. We validate FastAtlas by shading and rendering challenging scenes using different atlasing settings, reflecting the needs of different TSS applications (temporal reuse, streaming, reduced or elevated shading rates). We extensively compare FastAtlas to prior alternatives and demonstrate that it achieves better shading quality and reduces texture stretch compared to prior approaches using the same settings.
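    To make the packing problem concrete, here is a classic shelf-packing baseline for fitting per-frame chart rectangles into a fixed-width atlas. This is a textbook algorithm used for illustration only; FastAtlas's packer is stated to be more general and tighter than approaches of this kind.

```python
# Shelf packing sketch (a classic baseline, not FastAtlas's packer):
# rectangles are sorted by decreasing height and placed left to right on
# horizontal shelves; a new shelf opens when the current row fills up.

def shelf_pack(rects, atlas_width):
    """rects: list of (w, h). Returns (placements, used_height), where
    placements[i] = (x, y) for rects[i]."""
    order = sorted(range(len(rects)), key=lambda i: -rects[i][1])
    placements = [None] * len(rects)
    shelf_y, shelf_h, x = 0, 0, 0
    for i in order:
        w, h = rects[i]
        if w > atlas_width:
            raise ValueError("rect wider than atlas")
        if x + w > atlas_width:  # row full: start a new shelf
            shelf_y += shelf_h
            shelf_h, x = 0, 0
        if shelf_h == 0:
            shelf_h = h  # tallest (first) rect fixes the shelf height
        placements[i] = (x, shelf_y)
        x += w
    return placements, shelf_y + shelf_h
```

    Sorting by height first keeps shelves dense; the wasted space above short rectangles on a tall shelf is exactly what tighter packers such as the paper's aim to reclaim.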
  • Item
    Lactea: Web-Based Spectrum-Preserving Multi-Resolution Visualization of the GAIA Star Catalog
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Alghamdi, Reem; Hadwiger, Markus; Reina, Guido; Jaspe-Villanueva, Alberto; Aigner, Wolfgang; Andrienko, Natalia; Wang, Bei
    The explosion of data in astronomy has resulted in an era of unprecedented opportunities for discovery. The GAIA mission's catalog, containing a large number of light sources (mostly stars) with several parameters such as sky position and proper motion, is playing a significant role in advancing astronomy research and has been crucial in various scientific breakthroughs over the past decade. In its current release, more than 200 million stars contain a calibrated continuous spectrum, which is essential for characterizing astronomical information such as effective temperature and surface gravity, and enabling complex tasks like interstellar extinction detection and narrow-band filtering. Even though numerous studies have been conducted to visualize and analyze the data in the SciVis and AstroVis communities, no work has attempted to leverage spectral information for visualization in real time. Interactive exploration of such complex, massive data presents several challenges for visualization. This paper introduces a novel multi-resolution, spectrum-preserving data structure and a progressive, real-time visualization algorithm to handle the sheer volume of the data efficiently, enabling interactive visualization and exploration of the whole catalog's spectra. We show the efficiency of our method with our open-source, interactive, web-based tool for exploring the GAIA catalog, and discuss astronomically relevant use cases of our system.
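    One plausible reading of "multi-resolution, spectrum-preserving" is a tile hierarchy over the sky in which every inner node stores the sum of its children's spectra, so a coarse level can be drawn immediately and refined progressively. The sketch below is an assumption-laden illustration of that pattern, not Lactea's actual data structure; the names `TileNode` and `render_progressive` are hypothetical.

```python
# Spectrum-preserving multi-resolution sketch (illustrative assumption, not
# Lactea's data structure): inner nodes aggregate child spectra so coarse
# levels can be rendered before fine tiles finish streaming.

class TileNode:
    def __init__(self, spectrum, children=()):
        self.children = list(children)
        if self.children:
            # Parent spectrum is the sum of child spectra: no flux is lost
            # at coarse levels, hence "spectrum-preserving".
            n = len(self.children[0].spectrum)
            self.spectrum = [sum(c.spectrum[i] for c in self.children)
                             for i in range(n)]
        else:
            self.spectrum = list(spectrum)

def render_progressive(root, max_depth):
    """Collect the spectra to draw at a given refinement depth."""
    if max_depth == 0 or not root.children:
        return [root.spectrum]
    out = []
    for c in root.children:
        out.extend(render_progressive(c, max_depth - 1))
    return out
```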
  • Item
    Real-Time Rendering Framework for Holography
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Fricke, Sascha; Castillo, Susana; Eisemann, Martin; Magnor, Marcus; Bousseau, Adrien; Day, Angela
    With the advent of holographic near-eye displays, the need for rendering algorithms that output holograms instead of color images has emerged. These holograms usually encode phase maps that alter the phase of coherent light sources such that images result from diffraction effects. While common approaches rely on translating the output of traditional rendering systems to holograms in a post-processing step, we instead developed a rendering system that can directly output a phase map to a Spatial Light Modulator (SLM). Our hardware-ray-traced sparse point distribution and depth mapping enable rapid hologram generation, allowing for high-quality time-multiplexed holography for real-time content. Additionally, our system is compatible with foveated rendering, which enables further performance optimizations.
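    For background, the textbook point-source method computes a phase map by summing spherical waves from scene points at each SLM pixel and keeping the argument of the complex field. The sketch below shows that classical method only; it is not the paper's renderer, and the pixel pitch and wavelength defaults are illustrative values.

```python
import math, cmath

# Point-source hologram sketch (textbook method, not the paper's system):
# each scene point emits a spherical wave; the SLM phase map is the argument
# of the summed complex field at each SLM pixel.

def phase_map(points, slm_w, slm_h, pixel_pitch=8e-6, wavelength=520e-9):
    """points: list of (x, y, z, amplitude) in metres, z = distance to SLM.
    Returns an slm_h x slm_w list of phases in [-pi, pi]."""
    k = 2.0 * math.pi / wavelength  # wavenumber
    phases = []
    for row in range(slm_h):
        line = []
        for col in range(slm_w):
            px = (col - slm_w / 2) * pixel_pitch
            py = (row - slm_h / 2) * pixel_pitch
            field = 0j
            for x, y, z, amp in points:
                r = math.sqrt((px - x) ** 2 + (py - y) ** 2 + z ** 2)
                field += amp * cmath.exp(1j * k * r) / r  # spherical wave
            line.append(cmath.phase(field))
        phases.append(line)
    return phases
```

    The per-pixel sum over all points is why sparse point distributions, as in the paper, matter so much for real-time hologram generation.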
  • Item
    Learning Fast 3D Gaussian Splatting Rendering using Continuous Level of Detail
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Milef, Nicholas; Seyb, Dario; Keeler, Todd; Nguyen-Phuoc, Thu; Bozic, Aljaz; Kondguli, Sushant; Marshall, Carl; Bousseau, Adrien; Day, Angela
    3D Gaussian splatting (3DGS) has shown potential for rendering photorealistic 3D scenes in real-time. Unfortunately, rendering these scenes on less powerful hardware is still a challenge, especially with high-resolution displays. We introduce a continuous level of detail (CLOD) algorithm and demonstrate how our method can improve performance while preserving as much quality as possible. Our approach learns to order splats based on importance and optimize them such that a representative and realistic scene can be rendered for an arbitrary splat count. Our method does not require any additional memory or rendering overhead and works with existing 3DGS renderers. We also demonstrate the flexibility of our CLOD method by extending it with distance-based LOD selection, foveated rendering, and budget-based rendering.
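    The memory-free property described above follows from the prefix structure: if splats are stored in importance order, any LOD is just a leading slice of the same array. The sketch below illustrates that selection mechanism only; the learned importance scores themselves are the paper's contribution and are assumed given here.

```python
# Continuous-LOD selection sketch (illustrates the prefix idea, not the
# paper's learned ordering): splats sorted once by importance are stored in
# that order, and a budget selects a prefix, so no per-level copies exist.

def order_by_importance(splats, importance):
    """Sort splat indices so the most important come first."""
    return sorted(range(len(splats)), key=lambda i: -importance[i])

def select_for_budget(ordered, budget_fraction):
    """Take the leading fraction of the importance-ordered splats."""
    n = max(1, int(len(ordered) * budget_fraction))
    return ordered[:n]
```

    Distance-based LOD, foveation, or a frame-time budget then only need to vary `budget_fraction` (per scene, region, or frame) without touching the stored data.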
  • Item
    Random Access Segmentation Volume Compression for Interactive Volume Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Piochowiak, Max; Kurpicz, Florian; Dachsbacher, Carsten; Aigner, Wolfgang; Andrienko, Natalia; Wang, Bei
    Segmentation volumes are voxel data sets often used in machine learning, connectomics, and the natural sciences. Their large sizes make compression indispensable for storage and processing, including real-time visualization constrained by GPU video memory. Fast Compressed Segmentation Volumes (CSGV) [PD24] provide strong brick-wise compression and random access at the brick level. Voxels within a brick, however, have to be decoded serially, and thus rendering requires caching of visible full bricks, consuming extra memory. Without caching, accessing voxels can have a worst-case decoding overhead of up to a full brick (typically over 32,000 voxels). We present CSGV-R, which provides true multi-resolution random access at a per-voxel level. We leverage Huffman-shaped Wavelet Trees for random access into variable bit-length encodings and their rank operation to query label palette offsets in bricks. Our real-time segmentation volume visualization removes decoding artifacts from CSGV and renders CSGV-R volumes without caching bricks at faster render times. CSGV-R has slightly lower compression rates than CSGV, but outperforms Neuroglancer, the state-of-the-art compression technique with true random access, with 2× to 4× smaller data sets at rates between 0.648% and 4.411% of the original volume sizes.
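    The two operations the abstract relies on, per-position access and rank (counting occurrences of a label in a prefix), are exactly what wavelet trees provide. The minimal sketch below uses a balanced tree over the value range rather than the paper's Huffman-shaped variant, and stores plain bit lists instead of compressed rank structures.

```python
# Minimal wavelet tree sketch (balanced, not Huffman-shaped as in the paper)
# over a label sequence, supporting access(i) and rank(label, i).

class WaveletTree:
    def __init__(self, seq, lo=None, hi=None):
        if lo is None:
            lo, hi = min(seq), max(seq)
        self.lo, self.hi = lo, hi
        if lo == hi or not seq:
            self.bits = None  # leaf: all values equal lo
            return
        mid = (lo + hi) // 2
        self.bits = [1 if s > mid else 0 for s in seq]  # route to children
        self.left = WaveletTree([s for s in seq if s <= mid], lo, mid)
        self.right = WaveletTree([s for s in seq if s > mid], mid + 1, hi)

    def access(self, i):
        """Value at position i."""
        if self.bits is None:
            return self.lo
        b = self.bits[i]
        j = sum(1 for x in self.bits[:i] if x == b)  # rank within the child
        return (self.right if b else self.left).access(j)

    def rank(self, label, i):
        """Occurrences of label in positions [0, i)."""
        if self.bits is None:
            return i
        mid = (self.lo + self.hi) // 2
        b = 1 if label > mid else 0
        j = sum(1 for x in self.bits[:i] if x == b)
        return (self.right if b else self.left).rank(label, j)
```

    A production version would replace the linear bit counts with O(1) rank structures; the per-voxel decode then touches only one root-to-leaf path instead of a whole brick.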
  • Item
    Spherical Harmonic Exponentials for Efficient Glossy Reflections
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Silvennoinen, Ari; Sloan, Peter-Pike; Iwanicki, Michal; Nowrouzezahrai, Derek; Knoll, Aaron; Peters, Christoph
    We propose a high-performance and compact method for computing glossy specular reflections. Commonly-used prefiltered environment maps have large storage requirements and high error due to constrained treatment of view-dependence. We propose a factorized spherical harmonic exponential representation that exploits new observations of the benefits of log-space reconstruction for reflectance. Our method is compact, properly accounts for view-dependent reflections, and is more accurate than the state-of-the-industry solutions. We achieve higher quality results with an order of magnitude less memory, all with efficient and alias-free reconstruction of glossy reflections from environment lights and continuously-varying material roughness.
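    The key observation, reconstructing in log space, can be shown with ordinary low-order SH: project the log of a positive spherical signal instead of the signal itself, and exponentiate the reconstruction. The sketch below illustrates only that general idea with band-0/band-1 SH; the paper's factorized, view-dependent representation is not reproduced.

```python
import math

# Log-space SH reconstruction sketch (general idea only, not the paper's
# factorized representation): exponentiating an SH fit of log-radiance is
# guaranteed positive, unlike a direct linear SH fit.

SH_C0, SH_C1 = 0.282095, 0.488603  # band-0 and band-1 normalization constants

def sh_basis(d):
    x, y, z = d
    return [SH_C0, SH_C1 * y, SH_C1 * z, SH_C1 * x]

def project(samples):
    """samples: list of (unit direction, value). Monte-Carlo SH projection."""
    coeffs = [0.0] * 4
    for d, v in samples:
        for i, b in enumerate(sh_basis(d)):
            coeffs[i] += v * b
    w = 4.0 * math.pi / len(samples)
    return [c * w for c in coeffs]

def reconstruct(coeffs, d):
    return sum(c * b for c, b in zip(coeffs, sh_basis(d)))

def reconstruct_exp(log_coeffs, d):
    """Log-space variant: evaluate SH of log-radiance, then exponentiate."""
    return math.exp(reconstruct(log_coeffs, d))
```

    For `reconstruct_exp`, the projection is fed log-values; ringing in the SH fit then shows up as a multiplicative rather than additive error, which never produces negative radiance.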
  • Item
    Real-time Level-of-detail Strand-based Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Huang, Tao; Zhou, Yang; Lin, Daqi; Zhu, Junqiu; Yan, Ling-Qi; Wu, Kui; Wang, Beibei; Wilkie, Alexander
    We present a real-time strand-based rendering framework that ensures seamless transitions between different levels of detail (LoD) while maintaining a consistent appearance. We first introduce an aggregated BCSDF model to accurately capture both single and multiple scattering within a cluster of hairs or fibers. Building upon this, we further introduce an LoD framework for hair rendering that dynamically, adaptively, and independently replaces clusters of individual hairs with thick strands based on their projected screen widths. Through tests on diverse hairstyles with various hair colors and animations, as well as knit patches, our framework closely replicates the appearance of multiple-scattered full geometries at various viewing distances, achieving up to a 13× speedup.
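    The projected-screen-width criterion can be sketched with a simple pinhole projection: a cluster renders as individual hairs while it is wide enough on screen, and collapses to one aggregate strand otherwise. The threshold value and function names below are illustrative assumptions, not the paper's parameters, and the aggregated BCSDF itself is not shown.

```python
# Screen-width LoD selection sketch (illustrates the selection rule only,
# not the paper's aggregated BCSDF or transition blending).

def projected_width_px(cluster_radius, distance, focal_px):
    """Pinhole projection of a cluster's world-space radius to pixels."""
    return 2.0 * cluster_radius * focal_px / distance

def select_lod(cluster_radius, distance, focal_px, threshold_px=4.0):
    width = projected_width_px(cluster_radius, distance, focal_px)
    return "individual_strands" if width >= threshold_px else "aggregate_strand"
```

    Because each cluster is tested independently, near clusters can stay fully detailed while distant ones on the same head collapse, which is what makes per-cluster transitions adaptive.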