High-Performance Graphics 2025 - Symposium Papers
Item: Collaborative Texture Filtering (The Eurographics Association, 2025)
Akenine-Möller, Tomas; Ebelin, Pontus; Pharr, Matt; Wronski, Bartlomiej; Knoll, Aaron; Peters, Christoph

Recent advances in texture compression provide major improvements in compression ratios, but cannot use the GPU's texture units for decompression and filtering. This has led to the development of stochastic texture filtering (STF) techniques to avoid the high cost of multiple texel evaluations with such formats. Unfortunately, those methods can give undesirable visual appearance changes under magnification and may contain visible noise and flicker despite the use of spatiotemporal denoisers. Recent work substantially improves the quality of magnification filtering with STF by sharing decoded texel values between nearby pixels [WPAM25]. Using GPU wave communication intrinsics, this sharing can be performed inside actively executing shaders without memory traffic overhead. We take this idea further and present novel algorithms that use wave communication between lanes to avoid repeated texel decompression prior to filtering. By distributing unique work across lanes, we can achieve zero-error filtering using at most one texel evaluation per pixel given a sufficiently large magnification factor. For the remaining cases, we propose novel filtering fallback methods that also achieve higher quality than prior approaches.

Item: Fast Planetary Shadows using Fourier-Compressed Horizon Maps (The Eurographics Association, 2025)
Fritsch, Jonathan; Schneegans, Simon; Friederichs, Fabian; Flatken, Markus; Eisemann, Martin; Gerndt, Andreas; Knoll, Aaron; Peters, Christoph

Shadows on large-scale terrains are important for many applications, including video games and scientific visualization. Yet real-time rendering of realistic soft shadows at planetary scale is a challenging task.
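The lane-sharing idea behind Collaborative Texture Filtering above can be mimicked on the CPU: decode each unique texel once (as one lane would), share the result with every "lane", then filter. A toy 1D NumPy sketch with invented names — `decode_texel` stands in for an expensive neural/block decode and is not the paper's API:

```python
import numpy as np

def decode_texel(compressed, idx):
    """Stand-in for an expensive per-texel decode (arbitrary toy transform)."""
    return compressed[idx] * 0.5 + 0.25

def collaborative_filter(compressed, pixel_uvs):
    """Each pixel needs 2 texels (linear filter); under magnification many
    pixels share the same texels, so decode each unique texel once and
    share the result across the whole simulated wave."""
    tex_w = compressed.shape[0]
    footprints = []
    needed = set()
    for u in pixel_uvs:
        x = u * (tex_w - 1)
        i0 = int(np.floor(x))
        i1 = min(i0 + 1, tex_w - 1)
        footprints.append((i0, i1, x - i0))
        needed.update((i0, i1))
    # One decode per unique texel, "broadcast" to all lanes.
    shared = {i: decode_texel(compressed, i) for i in sorted(needed)}
    vals = [(1 - f) * shared[i0] + f * shared[i1] for i0, i1, f in footprints]
    return np.array(vals), len(shared)

vals, decodes = collaborative_filter(np.linspace(0, 1, 8), np.linspace(0.0, 0.2, 16))
# 16 magnified pixels are filtered from only 3 unique decodes (< 1 per pixel)
```
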
Notably, many shadowing algorithms require keeping significant amounts of extra terrain geometry in memory to account for out-of-frustum occluders. We present Fourier-Compressed Horizon Mapping, an enhancement of the horizon mapping algorithm that circumvents this requirement and renders shadows in a single render pass. For a given digital elevation model, we create a compact representation of each pixel's horizon profile and use it to render soft shadows at runtime. This representation is based on a truncated Fourier series stored in a multi-resolution texture pyramid and can be encoded in a single four-channel 32-bit floating-point texture. This makes the approach especially suitable for applications using a level-of-detail system for terrain rendering. By using a compact representation in frequency space, compressed horizon mapping consistently creates more accurate shadows than traditional horizon maps of the same memory footprint, while still running at real-time frame rates.

Item: GATE: Geometry-Aware Trained Encoding (The Eurographics Association, 2025)
Boksansky, Jakub; Meister, Daniel; Benthin, Carsten; Knoll, Aaron; Peters, Christoph

The encoding of input parameters is one of the fundamental building blocks of neural network algorithms. Its goal is to map the input data to a higher-dimensional space [RBA*19], typically supported by trained feature vectors [MESK22]. The mapping is crucial for the efficiency and approximation quality of neural networks. We propose a novel geometry-aware encoding called GATE that stores feature vectors on the surface of triangular meshes. Our encoding is suitable for neural rendering algorithms, for example, neural radiance caching [MRNK21]. It also avoids limitations of previous hash-based encoding schemes, such as hash collisions, the choice of resolution versus scene size, and divergent memory access.
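A minimal illustration of what an input encoding does — here a standard sinusoidal frequency encoding in the spirit of [RBA*19], not GATE itself, which replaces such schemes with surface-stored feature vectors:

```python
import numpy as np

def frequency_encode(x, num_bands=4):
    """Map scalar inputs in [0, 1] to 2 * num_bands features:
    sin/cos at octave-spaced frequencies. The higher-dimensional
    input lets small MLPs fit high-frequency signals."""
    x = np.atleast_1d(np.asarray(x, dtype=np.float64))
    freqs = 2.0 ** np.arange(num_bands) * np.pi  # pi, 2pi, 4pi, ...
    angles = x[:, None] * freqs[None, :]
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

feats = frequency_encode([0.0, 0.5], num_bands=4)
# each scalar becomes an 8-dimensional feature vector
```
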
Our approach decouples feature vector density from geometry density using mesh colors [YKH10], while allowing finer control over neural network training and adaptive level of detail.

Item: Hardware Accelerated Neural Block Texture Compression with Cooperative Vectors (The Eurographics Association, 2025)
Belcour, Laurent; Benyoub, Anis; Knoll, Aaron; Peters, Christoph

In this work, we present an extension to the neural texture compression method of Weinreich and colleagues [WDOHN24]. Like them, we leverage existing block compression methods, which make it possible to use hardware texture filtering to store a neural representation of physically-based rendering (PBR) texture sets (including albedo, normal maps, roughness, etc.). We show that even low-dynamic-range block compression formats keep the solution viable, which lets us achieve a higher compression ratio, or higher quality at a fixed compression ratio. We improve runtime performance using a tile-based rendering architecture that leverages the hardware matrix-multiplication engine. With all of this, we render 4K texture sets (9 channels per asset) with anisotropic filtering at 1080p using only 28 MB of VRAM per texture set in 0.55 ms on an Intel B580.

Item: High-Performance Graphics 2025 - Symposium Papers: Frontmatter (The Eurographics Association, 2025)
Knoll, Aaron; Peters, Christoph

Item: Interactive Stroke-based Neural SDF Sculpting (The Eurographics Association, 2025)
Rubab, Fizza; Tong, Yiying; Knoll, Aaron; Peters, Christoph

Recent advances in implicit neural representations have made them a popular choice for modeling 3D geometry. However, directly editing these representations presents challenges due to the complex relationship between model weights and surface geometry, as well as the slow optimization required to update neural fields. Among various editing tools, sculpting stands out as a valuable operation for the graphics and modeling community.
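Storing trainable features on the surface, as GATE does via mesh colors [YKH10], can be sketched in its simplest form as barycentric interpolation of per-corner feature vectors; the actual mesh-colors scheme also places features on edges and faces at a chosen resolution, which this toy version omits:

```python
import numpy as np

def surface_feature(tri_vertex_feats, bary):
    """Interpolate feature vectors stored at a triangle's corners.
    tri_vertex_feats: (3, F) features at the three vertices.
    bary: (3,) barycentric coordinates summing to one."""
    bary = np.asarray(bary, dtype=np.float64)
    assert abs(bary.sum() - 1.0) < 1e-9
    return bary @ np.asarray(tri_vertex_feats, dtype=np.float64)

feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.0]])
center = surface_feature(feats, [1 / 3, 1 / 3, 1 / 3])
corner = surface_feature(feats, [1.0, 0.0, 0.0])
```

A surface point's feature is thus found by coherent, collision-free lookups on its own triangle, in contrast to hash-grid lookups.
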
While traditional mesh-based tools like ZBrush enable intuitive edits, a comparable high-performance toolkit for sculpting neural SDFs is currently lacking. We introduce a framework that enables interactive surface sculpting directly on neural implicit representations with optimized performance. Unlike previous methods, which are limited to spot edits, our approach allows users to perform stroke-based modifications on the fly, ensuring intuitive shape manipulation without switching representations. By employing tubular neighborhoods to sample strokes and customizable brush profiles, we achieve smooth deformations along user-defined curves, providing intuitive control over the sculpting process. Our method demonstrates that versatile edits can be achieved while preserving the smooth nature of implicit representations, all without compromising interactive performance.

Item: LidarScout: Direct Out-of-Core Rendering of Massive Point Clouds (The Eurographics Association, 2025)
Erler, Philipp; Herzberger, Lukas; Wimmer, Michael; Schütz, Markus; Knoll, Aaron; Peters, Christoph

Large-scale terrain scans are the basis for many important tasks, such as topographic mapping, forestry, agriculture, and infrastructure planning. The resulting point cloud data sets are so massive that even basic tasks like viewing take hours to days of pre-processing to create level-of-detail structures that allow inspecting the data in its entirety in real time. In this paper, we propose a method that instantly visualizes massive country-sized scans with hundreds of billions of points. Upon opening the data set, we first load a sparse subsample of points and initialize an overview of the entire point cloud, immediately followed by a surface reconstruction process that generates higher-quality, hole-free heightmaps. As users navigate towards a region of interest, we prioritize heightmap construction around the user's viewpoint.
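A heavily simplified 2D sketch of the stroke-based sculpting idea above: displace an SDF inside a tubular neighborhood of a polyline stroke with a smooth brush falloff. The smoothstep profile and all parameters are illustrative choices, not the paper's exact brushes:

```python
import numpy as np

def dist_to_segment(p, a, b):
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def stroke_brush(sdf_at, stroke, radius, strength):
    """Return a new SDF: inside the stroke's tubular neighborhood,
    push the surface outward with a smoothstep brush profile."""
    def deformed(p):
        d = min(dist_to_segment(p, stroke[i], stroke[i + 1])
                for i in range(len(stroke) - 1))
        t = np.clip(1.0 - d / radius, 0.0, 1.0)
        falloff = t * t * (3.0 - 2.0 * t)  # smoothstep
        return sdf_at(p) - strength * falloff
    return deformed

circle = lambda p: np.linalg.norm(p) - 1.0
stroke = [np.array([0.0, 1.0]), np.array([0.5, 1.0])]
bumped = stroke_brush(circle, stroke, radius=0.3, strength=0.2)
# on the stroke the surface moves outward; far away the SDF is untouched
```
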
Once a user zooms in closely, we load the full-resolution point cloud data for that region and update the corresponding heightmap textures with the full-resolution data. As users navigate elsewhere, full-resolution point data that is no longer needed is unloaded, but the updated heightmap textures are retained as a form of medium level of detail. Overall, our method constitutes a form of direct out-of-core rendering for massive point cloud data sets (terabytes, compressed) that requires no preprocessing and no additional disk space. Source code, executable, pre-trained model, and dataset are available at: https://github.com/cg-tuwien/lidarscout

Item: No More Shading Languages: Compiling C++ to Vulkan Shaders (The Eurographics Association, 2025)
Devillers, Hugo; Kurtenacker, Matthias; Membarth, Richard; Lemme, Stefan; Kenzel, Michael; Yazici, Ömercan; Slusallek, Philipp; Knoll, Aaron; Peters, Christoph

Graphics APIs have traditionally relied on shading languages; however, these languages have a number of fundamental defects and limitations. By contrast, GPU compute platforms offer powerful, feature-rich languages suitable for heterogeneous compute. We propose reframing shading languages as embedded domain-specific languages layered on top of a more general language like C++, doing away with traditional limitations on pointers, functions, and recursion, to the benefit of programmability. This represents a significant compilation challenge because the limitations of shaders are reflected in their lower-level representations. We present the Vcc compiler, which allows conventional C and C++ code to run as Vulkan shaders. Our compiler is complemented by a simple shading library and exposes GPU particulars as intrinsics and annotations.
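The residency policy LidarScout describes above — keep full-resolution tiles near the viewer, evict the rest, retain refined heightmaps as a medium level of detail — behaves like a bounded LRU cache. A toy sketch using Python's OrderedDict; the actual system's prioritization is more involved:

```python
from collections import OrderedDict

class TileCache:
    """LRU cache of full-resolution tiles; evicted tiles keep only
    their (already refined) heightmap as a medium level of detail."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.full_res = OrderedDict()   # tile_id -> point data
        self.heightmaps = {}            # tile_id -> refined heightmap

    def load(self, tile_id, load_points, refine_heightmap):
        if tile_id in self.full_res:
            self.full_res.move_to_end(tile_id)  # mark recently used
            return self.full_res[tile_id]
        points = load_points(tile_id)
        self.heightmaps[tile_id] = refine_heightmap(points)
        self.full_res[tile_id] = points
        if len(self.full_res) > self.capacity:
            self.full_res.popitem(last=False)   # evict LRU points only
        return points

cache = TileCache(capacity=2)
for t in ["a", "b", "c"]:
    cache.load(t, lambda t: f"points:{t}", lambda pts: f"hm:{pts}")
# tile "a" loses its full-resolution points but keeps its heightmap
```
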
We evaluate the performance of our compiler using a selection of benchmarks, including a real-time path tracer, achieving performance competitive with their native CUDA counterparts.

Item: Real-Time GPU Tree Generation (The Eurographics Association, 2025)
Kuth, Bastian; Oberberger, Max; Faber, Carsten; Pfeifer, Pirmin; Tabaei, Seyedmasih; Baumeister, Dominik; Meyer, Quirin; Knoll, Aaron; Peters, Christoph

Trees for real-time media are typically created using procedural algorithms and then baked to a polygon format, requiring large amounts of memory. We propose a novel procedural system and model for generating and rendering realistic trees and similar vegetation, specifically tailored to run in real time on GPUs. By using GPU work graphs with mesh nodes, we render gigabytes' worth of tree geometry from kilobytes of generation code every frame, exclusively on the GPU. Contrary to prior work, our method combines instant in-engine artist authoring, continuous frame-specific level of detail and tessellation, highly detailed animation, and seasonal details like blossoms, fruits, and snow. Generating the unique tree geometries of our teaser test scene and rendering them to the G-buffer takes 3.13 ms on an AMD Radeon RX 7900 XTX.

Item: Real-time Rendering of Animated Meshless Representation (The Eurographics Association, 2025)
Luton, Pacôme; Tricard, Thibault; Knoll, Aaron; Peters, Christoph

Meshless representations, such as implicit representations (Signed Distance Fields, procedural density fields, etc.) and 3D textures, are important in computer graphics. Implicit representations allow representing geometry at infinite resolution and low memory cost. 3D textures are an explicit representation that stores shape information in a regular 3D grid of voxels, allowing for simple anti-aliasing, mipmapping, and dynamic editing. Recent works have improved the rendering performance of both representations, making them viable for real-time rendering.
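The "gigabytes of geometry from kilobytes of code" point in Real-Time GPU Tree Generation above rests on procedural amplification: a tiny recursive rule set expands into a large branch list every frame. A toy 2D generator with invented parameters (branching angle, length decay), not the paper's model:

```python
import numpy as np

def grow(pos, direction, length, depth, branches):
    """Recursively emit branch segments from a compact rule set."""
    if depth == 0:
        return
    tip = pos + direction * length
    branches.append((pos, tip))
    # Two children: rotate the growth direction by a fixed angle.
    for angle in (-0.5, 0.5):
        c, s = np.cos(angle), np.sin(angle)
        child_dir = np.array([c * direction[0] - s * direction[1],
                              s * direction[0] + c * direction[1]])
        grow(tip, child_dir, length * 0.7, depth - 1, branches)

branches = []
grow(np.array([0.0, 0.0]), np.array([0.0, 1.0]), 1.0, depth=6,
     branches=branches)
# a binary rule set of depth 6 already amplifies into 2^6 - 1 = 63 segments
```
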
However, their animation remains a tedious task, limiting their adoption. In this work, we propose a data structure and a rendering pipeline for animating meshless geometric representations. To achieve this, we encase the meshless representation in a coarse tetrahedral mesh, rigged as we would a typical articulated character. At rendering time, we apply the deformation from the rest pose to the full volume using interval shading [Tri24]. Our method integrates directly into a classical rasterization-based rendering pipeline, allowing real-time animation of meshless representations using pre-existing animation software.
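The tetrahedral-cage deformation described above can be sketched as: compute a sample point's barycentric coordinates in its rest-pose tetrahedron, then re-evaluate them against the posed vertices. A minimal 3D example of that mapping only; the paper additionally renders the enclosed field via interval shading [Tri24]:

```python
import numpy as np

def tet_barycentric(p, verts):
    """Barycentric coordinates of point p in tetrahedron verts (4, 3)."""
    T = np.column_stack([verts[0] - verts[3],
                         verts[1] - verts[3],
                         verts[2] - verts[3]])
    b = np.linalg.solve(T, p - verts[3])
    return np.array([b[0], b[1], b[2], 1.0 - b.sum()])

def deform(p, rest_verts, posed_verts):
    """Carry a rest-pose sample point into the posed tetrahedron."""
    w = tet_barycentric(p, rest_verts)
    return w @ posed_verts

rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
posed = rest + np.array([2.0, 0.0, 0.0])  # rigid translation of the cage
p = np.array([0.25, 0.25, 0.25])
q = deform(p, rest, posed)
# q ≈ [2.25, 0.25, 0.25]: the enclosed sample follows the cage
```

Because the barycentric weights are computed once against the rest pose, any posed configuration of the cage (translation, bending, skinning) deforms the enclosed volume consistently.
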