Search Results

Now showing 1 - 8 of 8
  • Perception-driven Accelerated Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Weier, Martin; Stengel, Michael; Roth, Thorsten; Didyk, Piotr; Eisemann, Elmar; Eisemann, Martin; Grogorick, Steve; Hinkenjann, André; Kruijff, Ernst; Magnor, Marcus; Myszkowski, Karol; Slusallek, Philipp (eds. Victor Ostromoukhov and Matthias Zwicker)
    Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources remain a limiting factor. Hence, many costly but desirable aspects of realism, including global illumination, accurate depth of field and motion blur, and spectral effects, are often not accounted for, especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or light-field displays). These developments pose significant unsolved technical challenges, owing to limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present open problems and promising future research addressing the question of how to minimize the effort needed to compute and display only the necessary pixels while still offering the user a full visual experience.
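To make the kind of perceptual model surveyed in this report concrete, the sketch below derives a relative sampling density from visual eccentricity using a linear acuity-falloff (minimum-angle-of-resolution) model. The constants and function names are illustrative assumptions for this sketch, not values taken from the report.

```cpp
#include <algorithm>
#include <cstdio>

// Minimum angle of resolution (MAR) in degrees: visual acuity degrades
// roughly linearly with eccentricity. Constants are illustrative assumptions.
double minAngleOfResolution(double eccentricityDeg) {
    const double marFovea = 1.0 / 60.0; // ~1 arcmin in the fovea (assumed)
    const double slope    = 0.022;      // acuity falloff per degree (assumed)
    return marFovea + slope * eccentricityDeg;
}

// Relative sampling density: full density in the fovea, decaying with MAR.
double sampleDensity(double eccentricityDeg) {
    double d = minAngleOfResolution(0.0) / minAngleOfResolution(eccentricityDeg);
    return std::clamp(d, 0.0, 1.0);
}

int main() {
    for (double e : {0.0, 5.0, 15.0, 30.0, 60.0})
        std::printf("eccentricity %4.1f deg -> relative density %.3f\n",
                    e, sampleDensity(e));
    return 0;
}
```

A foveated renderer would use such a density to allocate rays or shading samples, spending full effort only where the gaze lands.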
  • Game-based Transformations: A Playful Approach to Learning Transformations in Computer Graphics
    (The Eurographics Association, 2023) Eisemann, Martin (eds. Magana, Alejandra; Zara, Jiri)
    In this paper, we present a playful, game-based learning approach to teaching transformations in a second-year undergraduate computer graphics course. While the theoretical concepts are taught in class, the exercise consists of two web-based tools that help students get a playful grasp of this complex topic, which is the foundation for many of the concepts typically taught later in computer graphics, such as the rendering pipeline, animation, camera motion, shadow mapping, and many more. The students' final projects and their feedback indicate that the game-based introduction was well received.
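For readers unfamiliar with the subject matter of the exercise, here is a minimal, self-contained example of the core concept the tools teach: composing 2D transformations as homogeneous matrices, where the order of composition matters. This is an illustrative sketch, not part of the paper's web-based tools.

```cpp
#include <array>
#include <cmath>
#include <cstdio>

// 3x3 homogeneous matrix for 2D transformations.
using Mat3 = std::array<std::array<double, 3>, 3>;

Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 c{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

Mat3 translate(double tx, double ty) { return {{{1, 0, tx}, {0, 1, ty}, {0, 0, 1}}}; }
Mat3 scale(double sx, double sy)     { return {{{sx, 0, 0}, {0, sy, 0}, {0, 0, 1}}}; }
Mat3 rotate(double rad) {
    double c = std::cos(rad), s = std::sin(rad);
    return {{{c, -s, 0}, {s, c, 0}, {0, 0, 1}}};
}

int main() {
    // Matrices apply right-to-left: scale first, then rotate, then translate.
    const double pi = 3.14159265358979323846;
    Mat3 model = mul(translate(2, 1), mul(rotate(pi / 2), scale(2, 2)));

    double p[3] = {1, 0, 1};  // point (1, 0) in homogeneous coordinates
    double q[3] = {0, 0, 0};
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 3; ++k)
            q[i] += model[i][k] * p[k];
    std::printf("(1, 0) -> (%.2f, %.2f)\n", q[0], q[1]);  // expected (2.00, 3.00)
}
```

Swapping the composition order (e.g., translating before rotating) yields a different result, which is exactly the insight such playful tools let students discover interactively.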
  • General and Robust Error Estimation and Reconstruction for Monte Carlo Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2015) Bauszat, Pablo; Eisemann, Martin; Eisemann, Elmar; Magnor, Marcus (eds. Olga Sorkine-Hornung and Michael Wimmer)
    Adaptive filtering techniques have proven successful in handling non-uniform noise in Monte Carlo rendering. A recent trend is to choose an optimal filter per pixel from a selection of spatially invariant filters. Nonetheless, the best filter choice is difficult to predict in the absence of a reference rendering. Our approach relies on the observation that the reconstruction error of a given filter is locally smooth. Hence, we propose to construct a dense error prediction from a small set of sparse but robust estimates. The filter selection is then formulated as a non-local optimization problem, which we solve via graph cuts to avoid visual artifacts caused by inconsistent filter choices. Our approach imposes no restrictions on the filters used, outperforms previous state-of-the-art techniques, and provides an extensible framework for future reconstruction techniques.
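For intuition about the selection step described above, the sketch below mimics the pipeline in a drastically reduced 1D setting: sparse per-filter error estimates are spread into a dense prediction, and each pixel picks the filter with the lowest predicted error. All names and numbers are hypothetical, and the greedy per-pixel argmin merely stands in for the paper's spatially regularized graph-cut labeling.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int numPixels = 16, stride = 4;

    // Sparse per-filter error estimates at every stride-th pixel
    // (-1.0 marks "not estimated"; values are made up for the demo).
    std::vector<std::vector<double>> sparseError(
        2, std::vector<double>(numPixels, -1.0));
    for (int p = 0; p < numPixels; p += stride) {
        sparseError[0][p] = 0.1 + 0.05 * p;  // filter 0: error grows rightwards
        sparseError[1][p] = 0.9 - 0.05 * p;  // filter 1: error shrinks rightwards
    }

    // Dense prediction via nearest-estimate lookup (valid because the
    // reconstruction error of a given filter is locally smooth), then a
    // greedy per-pixel argmin over the two filters.
    std::vector<int> selection(numPixels);
    for (int p = 0; p < numPixels; ++p) {
        int nearest = (p / stride) * stride;  // round down to an estimated pixel
        selection[p] = sparseError[0][nearest] <= sparseError[1][nearest] ? 0 : 1;
    }

    for (int p = 0; p < numPixels; ++p)
        std::printf("pixel %2d -> filter %d\n", p, selection[p]);
}
```

In the full method, the argmin is replaced by an energy with a smoothness term over neighboring pixels, solved with graph cuts, so that filter labels do not flip incoherently between adjacent pixels.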
  • The Split Grid - A Hierarchical 1D-Grid-based Acceleration Data Structure for Ray Tracing
    (The Eurographics Association, 2014) Bauszat, Pablo; Kastner, Marc Aurel; Eisemann, Martin; Magnor, Marcus (eds. Mathias Paulin and Carsten Dachsbacher)
    We present a new acceleration structure for ray tracing called the Split Grid. Combining concepts from hierarchical grids, kd-trees, and bounding volume hierarchies (BVHs), our approach is based on the idea of nesting 1D grids. The proposed acceleration structure is compact in storage, adapts to the scene geometry, and can be traversed with a fast and efficient traversal scheme. We show that the Split Grid is comparable to other current state-of-the-art acceleration structures in traversal performance and memory footprint. While other data structures usually achieve these performance levels only through a complex and expensive construction process (e.g., using the Surface Area Heuristic (SAH) [MB90]), the Split Grid is built with a very simple construction scheme, which is a major benefit of our approach.
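The basic building block, a ray walking a single 1D grid along its split axis, can be sketched as below. This is an illustrative reconstruction based on the abstract; the hierarchical nesting of 1D grids that gives the Split Grid its adaptivity is deliberately left out.

```cpp
#include <cmath>
#include <cstdio>

// DDA-style walk through a uniform 1D grid, visiting slabs in
// front-to-back order along the split axis.
int main() {
    const int    numCells = 8;
    const double gridMin = 0.0, cellSize = 1.0;

    double origin = 0.3, dir = 0.8;  // ray restricted to the split axis, dir != 0

    int    cell   = (int)std::floor((origin - gridMin) / cellSize);
    int    step   = dir > 0.0 ? 1 : -1;
    double tDelta = cellSize / std::fabs(dir);
    double nextBoundary = gridMin + (cell + (step > 0 ? 1 : 0)) * cellSize;
    double tNext = (nextBoundary - origin) / dir;

    while (cell >= 0 && cell < numCells) {
        std::printf("visit slab %d (exit at t = %.3f)\n", cell, tNext);
        // In the Split Grid, one would descend here into the nested 1D grid
        // stored in this slab before stepping on to the next one.
        cell  += step;
        tNext += tDelta;
    }
}
```

Because each level only subdivides along one axis, the per-step work stays minimal, which is what makes a nesting of such walks competitive with kd-tree or BVH traversal.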
  • D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Kappel, Moritz; Hahlbohm, Florian; Scholz, Timon; Castillo, Susana; Theobalt, Christian; Eisemann, Martin; Golyanik, Vladislav; Magnor, Marcus (eds. Bousseau, Adrien; Day, Angela)
    Dynamic reconstruction and spatiotemporal novel-view synthesis of non-rigidly deforming scenes have recently gained increased attention. While existing work achieves impressive quality and performance on multi-view or teleporting-camera setups, most methods fail to efficiently and faithfully recover motion and appearance from casual monocular captures. This paper contributes to the field by introducing a new method for dynamic novel-view synthesis from monocular video, such as casual smartphone captures. Our approach represents the scene as a dynamic neural point cloud, an implicit, time-conditioned point distribution that encodes local geometry and appearance in separate hash-encoded neural feature grids for static and dynamic regions. By sampling a discrete point cloud from our model, we can efficiently render high-quality novel views using a fast differentiable rasterizer and a neural rendering network. Similar to recent work, we leverage advances in neural scene analysis by incorporating data-driven priors such as monocular depth estimation and object segmentation to resolve motion and depth ambiguities originating from the monocular captures. Beyond guiding the optimization process, we show that these priors can be exploited to explicitly initialize our scene representation, drastically improving optimization speed and final image quality. As evidenced by our experimental evaluation, our dynamic point cloud model not only enables fast optimization and real-time frame rates for interactive applications, but also achieves competitive image quality on monocular benchmark sequences. Our code and data are available online at https://moritzkappel.github.io/projects/dnpc/.
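One ingredient named in the abstract, hash-encoded neural feature grids, can be illustrated compactly. The sketch below shows the standard spatial hash popularized by multiresolution hash encodings (instant-NGP) for mapping grid vertices into a fixed-size feature table; it is an assumption for illustration that D-NPC uses this exact scheme.

```cpp
#include <cstdint>
#include <cstdio>

// Spatial hash in the style of multiresolution hash encodings: a 3D grid
// vertex is mapped to a slot in a fixed-size feature table. Large primes
// decorrelate the dimensions (the first prime is 1 by convention).
uint32_t hashGridIndex(uint32_t x, uint32_t y, uint32_t z, uint32_t tableSize) {
    const uint32_t p2 = 2654435761u;
    const uint32_t p3 = 805459861u;
    return (x ^ (y * p2) ^ (z * p3)) % tableSize;  // tableSize: a power of two
}

int main() {
    const uint32_t tableSize = 1u << 19;
    // Corner vertices of a grid cell land at (pseudo-)independent slots;
    // the learned features stored there are interpolated and decoded by a
    // small neural network.
    std::printf("slot(10, 4, 7) = %u\n", hashGridIndex(10, 4, 7, tableSize));
    std::printf("slot(11, 4, 7) = %u\n", hashGridIndex(11, 4, 7, tableSize));
}
```

Hash collisions are tolerated rather than resolved: the optimization learns to average them out, which keeps lookups fast enough for the real-time frame rates the paper reports.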
  • Axis-Normalized Ray-Box Intersection
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Friederichs, Fabian; Benthin, Carsten; Grogorick, Steve; Eisemann, Elmar; Magnor, Marcus; Eisemann, Martin (eds. Bousseau, Adrien; Day, Angela)
    Ray/axis-aligned-bounding-box intersection tests play a crucial role in the runtime performance of many rendering applications, driven not by their complexity but mainly by the sheer volume of tests required. While existing solutions were believed to be essentially optimal in terms of runtime on current hardware, our paper introduces a new intersection test requiring fewer arithmetic operations than all previous methods. By transforming the ray, we eliminate the need for one third of the traditional bounding-slab tests and achieve a speedup of approximately 13.8% or 10.9%, depending on the compiler. We present detailed runtime analyses in various scenarios.
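For context, the classical three-slab test that such methods build on is sketched below. This is the conventional baseline, not the paper's axis-normalized variant, which transforms the ray so that one of the three slab pairs can be dropped.

```cpp
#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Classical slab test: intersect the ray with the three pairs of
// axis-aligned planes bounding the box; invDir holds the precomputed
// reciprocal ray direction. NaN corner cases are ignored in this sketch.
bool intersectsBox(Vec3 org, Vec3 invDir, Vec3 bmin, Vec3 bmax, float tMax) {
    float tx0 = (bmin.x - org.x) * invDir.x, tx1 = (bmax.x - org.x) * invDir.x;
    float ty0 = (bmin.y - org.y) * invDir.y, ty1 = (bmax.y - org.y) * invDir.y;
    float tz0 = (bmin.z - org.z) * invDir.z, tz1 = (bmax.z - org.z) * invDir.z;
    float tNear = std::max({std::min(tx0, tx1), std::min(ty0, ty1),
                            std::min(tz0, tz1), 0.0f});
    float tFar  = std::min({std::max(tx0, tx1), std::max(ty0, ty1),
                            std::max(tz0, tz1), tMax});
    return tNear <= tFar;  // overlap of all slab intervals => hit
}

int main() {
    Vec3 org{0, 0, 0}, invDir{1, 1, 1};  // ray direction (1, 1, 1)
    Vec3 bmin{1, 1, 1}, bmax{2, 2, 2};
    std::printf("hit: %s\n",
                intersectsBox(org, invDir, bmin, bmax, 1e30f) ? "yes" : "no");
}
```

Since BVH traversal performs millions of these tests per frame, shaving even a few arithmetic operations per test translates into the double-digit percentage speedups the paper reports.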
  • Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Hahlbohm, Florian; Friederichs, Fabian; Weyrich, Tim; Franke, Linus; Kappel, Moritz; Castillo, Susana; Stamminger, Marc; Eisemann, Martin; Magnor, Marcus (eds. Bousseau, Adrien; Day, Angela)
    3D Gaussian Splats (3DGS) have proven to be a versatile rendering primitive, both for inverse rendering and for real-time exploration of scenes. In these applications, coherence across camera frames and multiple views is crucial, be it for robust convergence of a scene reconstruction or for artifact-free fly-throughs. Recent work has started to mitigate artifacts that break multi-view coherence, including popping artifacts due to inconsistent transparency sorting and perspective-correct outlines of (2D) splats. At the same time, real-time requirements have forced such implementations to accept compromises in how the transparency of large assemblies of 3D Gaussians is resolved, in turn breaking coherence in other ways. In our work, we aim at maximum coherence by rendering fully perspective-correct 3D Gaussians while using hybrid transparency, a high-quality approximation of accurate blending, on a per-pixel level in order to retain real-time frame rates. Our fast and perspectively accurate approach for evaluating 3D Gaussians does not require matrix inversions, thereby ensuring numerical stability and eliminating the need for special handling of degenerate splats, and our hybrid transparency formulation for blending maintains similar quality to fully resolved per-pixel transparencies at a fraction of the rendering cost. We further show that each of these two components can be independently integrated into Gaussian splatting systems. In combination, they achieve up to 2× higher frame rates, 2× faster optimization, and equal or better image quality with fewer rendering artifacts compared to traditional 3DGS on common benchmarks.
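The hybrid transparency component can be sketched per pixel as below. This follows the general hybrid-transparency formulation (exact blending of a small sorted "core" of fragments, order-independent merging of the remaining "tail"); the core size, names, and tail estimator are assumptions for this sketch, not the paper's implementation.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

struct Fragment { float depth, alpha, color; };  // grayscale color for brevity

float shadePixel(std::vector<Fragment> frags, size_t K) {
    std::sort(frags.begin(), frags.end(),
              [](const Fragment& a, const Fragment& b) { return a.depth < b.depth; });

    float color = 0.0f, transmittance = 1.0f;
    size_t core = std::min(K, frags.size());
    for (size_t i = 0; i < core; ++i) {          // exact front-to-back blending
        color         += transmittance * frags[i].alpha * frags[i].color;
        transmittance *= 1.0f - frags[i].alpha;
    }

    float sumAC = 0.0f, sumA = 0.0f, tailTrans = 1.0f;
    for (size_t i = core; i < frags.size(); ++i) {  // order-independent tail
        sumAC     += frags[i].alpha * frags[i].color;
        sumA      += frags[i].alpha;
        tailTrans *= 1.0f - frags[i].alpha;
    }
    if (sumA > 0.0f)  // alpha-weighted tail average, attenuated by the core
        color += transmittance * (1.0f - tailTrans) * (sumAC / sumA);
    return color;  // compositing over the background (via tailTrans) omitted
}

int main() {
    std::vector<Fragment> frags{{0.9f, 0.5f, 0.0f}, {0.2f, 0.5f, 1.0f},
                                {0.7f, 0.5f, 1.0f}, {0.5f, 0.5f, 0.0f}};
    std::printf("pixel value: %.3f\n", shadePixel(frags, 2));
}
```

The appeal of this scheme is that only K fragments per pixel need sorting and storage, while the tail costs a constant amount of accumulation, which is why it can approach fully sorted per-pixel transparency at a fraction of the cost.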
  • Real-Time Rendering Framework for Holography
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Fricke, Sascha; Castillo, Susana; Eisemann, Martin; Magnor, Marcus (eds. Bousseau, Adrien; Day, Angela)
    With the advent of holographic near-eye displays, the need for rendering algorithms that output holograms instead of color images has emerged. These holograms usually encode phase maps that alter the phase of coherent light sources such that images result from diffraction effects. While common approaches rely on translating the output of traditional rendering systems into holograms in a post-processing step, we instead developed a rendering system that directly outputs a phase map to a Spatial Light Modulator (SLM). Our hardware-ray-traced sparse point distribution and depth mapping enable rapid hologram generation, allowing for high-quality time-multiplexed holography for real-time content. Additionally, our system is compatible with foveated rendering, which enables further performance optimizations.
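As background for the point-based approach described above, the sketch below shows the textbook point-source formulation of a phase-only hologram: each scene point contributes a spherical wave to every SLM pixel, and only the phase of the summed field is kept. It is a generic illustration with assumed constants, not the paper's time-multiplexed, hardware-ray-traced pipeline.

```cpp
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

int main() {
    const double pi     = 3.14159265358979323846;
    const double lambda = 520e-9;             // green laser wavelength (assumed)
    const double k      = 2.0 * pi / lambda;  // wavenumber
    const double pitch  = 8e-6;               // SLM pixel pitch (assumed)
    const int    N      = 64;                 // tiny SLM for the demo

    struct Point { double x, y, z, amp; };    // scene points in front of the SLM
    std::vector<Point> points{{0.0, 0.0, 0.10, 1.0},
                              {1e-4, 5e-5, 0.12, 0.8}};

    std::vector<double> phase(N * N);
    for (int j = 0; j < N; ++j)
        for (int i = 0; i < N; ++i) {
            double px = (i - N / 2) * pitch, py = (j - N / 2) * pitch;
            std::complex<double> field(0.0, 0.0);
            for (const Point& p : points) {
                // Spherical wave from point p evaluated at SLM pixel (px, py).
                double r = std::sqrt((px - p.x) * (px - p.x) +
                                     (py - p.y) * (py - p.y) + p.z * p.z);
                field += p.amp / r * std::exp(std::complex<double>(0.0, k * r));
            }
            phase[j * N + i] = std::arg(field);  // value driving the SLM pixel
        }
    std::printf("phase at pixel (0, 0): %.3f rad\n", phase[0]);
}
```

The brute-force double loop above is exactly what makes naive hologram generation slow; a sparse, ray-traced point distribution as described in the paper keeps the number of contributing points small enough for real-time operation.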