EG 2026 - Posters

Posters
Video Rater: A Framework for Subjective Evaluation of Rendering Artifacts
Cyganik Karol, Brzezinski Dariusz
Kollani: A Distributed Tool for Real-Time Collaborative Reviews of 3D Assets
Andreussi Francesco, Hickstein Claudio, Minsel Martin
Neural Approximation of Generalized Voronoi Diagrams
Rigas Panagiotis, Ioannakis George, Emiris Ioannis
Dynamic Region Filling for Robotic Artistic Painting using Visual Feedback
Stroh Michael, Berio Daniel, Fol Leymarie Frederic, Deussen Oliver
Hybrid Contrast-Aware Fog Detection for Automotive Vision Systems
Procházková Jana, Mikuláček Pavel, Štarha Pavel
SemanticWeaponry: A Modular Approach to Text-to-3D Model Generation
Lower Thomas, Anderson Eike Falk
Automating Makeup Appearance Acquisition via Inverse Rendering for Virtual Try-On
Li Tao, Tran Quoc Nam Loïc, Bokaris Panagiotis-Alexandros
MBRCNet: Multi-view Breast Reconstruction and Classification Network
Pang Yan, Quiñones Rubi
Deep Illumination–Guided Light Probe Placement
Tarasidis Andreas, Vasilakis Andreas-Alexandros, Fudos Ioannis
Compressing Double-Phase Holograms using 2D Gaussians
Fan Xiaoyue, Zhan Yicheng, Mazumdar Amrita, Akşit Kaan
Real-Time Angular Color Shift Compensation for On-Set Virtual Production
Beck Christopher, Schattkowsky Tim, Albertz Stefan
Still2Scene: Hybrid Gaussian Environments for Virtual Production
Sun Xiaohan, O'Sullivan Carol
Semi-Automatic View-Based Segmentation of Gaussian Splat Scenes
Bisgaard Mathias, Møller Frederik, Nielsen Jonas Moody, Mørch Katrine, Baran Samuel, Gaarsdal Jesper, Nikolov Ivan, Madsen Claus
Decoupled Reprojection Consistency for Diagnosing 3D Gaussian Splatting Failures
Park Jin-Hyeong
Opacity-Based Occlusion Culling for 3D Gaussian Splatting
Giannone Matteo, Ibrahim Mohamed, Liu Yang
Smaller and Faster 3DGS via Post-Training Dictionary Learning
Gong Jiarong, Unger Jonas, Miandji Ehsan

BibTeX (EG 2026 - Posters)
@inproceedings{10.2312:egp.20261000,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{Video Rater: A Framework for Subjective Evaluation of Rendering Artifacts}},
  author = {Cyganik, Karol and Brzezinski, Dariusz},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20261000}
}
@inproceedings{10.2312:egp.20261001,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{Kollani: A Distributed Tool for Real-Time Collaborative Reviews of 3D Assets}},
  author = {Andreussi, Francesco and Hickstein, Claudio and Minsel, Martin},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20261001}
}
@inproceedings{10.2312:egp.20261003,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{Neural Approximation of Generalized Voronoi Diagrams}},
  author = {Rigas, Panagiotis and Ioannakis, George and Emiris, Ioannis},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20261003}
}
@inproceedings{10.2312:egp.20261004,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{Dynamic Region Filling for Robotic Artistic Painting using Visual Feedback}},
  author = {Stroh, Michael and Berio, Daniel and Fol Leymarie, Frederic and Deussen, Oliver},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20261004}
}
@inproceedings{10.2312:egp.20261005,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{Hybrid Contrast-Aware Fog Detection for Automotive Vision Systems}},
  author = {Procházková, Jana and Mikuláček, Pavel and Štarha, Pavel},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20261005}
}
@inproceedings{10.2312:egp.20261006,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{SemanticWeaponry: A Modular Approach to Text-to-3D Model Generation}},
  author = {Lower, Thomas and Anderson, Eike Falk},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20261006}
}
@inproceedings{10.2312:egp.20261007,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{Automating Makeup Appearance Acquisition via Inverse Rendering for Virtual Try-On}},
  author = {Li, Tao and Tran, Quoc Nam Loïc and Bokaris, Panagiotis-Alexandros},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20261007}
}
@inproceedings{10.2312:egp.20261008,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{MBRCNet: Multi-view Breast Reconstruction and Classification Network}},
  author = {Pang, Yan and Quiñones, Rubi},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20261008}
}
@inproceedings{10.2312:egp.20261009,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{Deep Illumination–Guided Light Probe Placement}},
  author = {Tarasidis, Andreas and Vasilakis, Andreas-Alexandros and Fudos, Ioannis},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20261009}
}
@inproceedings{10.2312:egp.20261010,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{Compressing Double-Phase Holograms using 2D Gaussians}},
  author = {Fan, Xiaoyue and Zhan, Yicheng and Mazumdar, Amrita and Akşit, Kaan},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20261010}
}
@inproceedings{10.2312:egp.20261011,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{Real-Time Angular Color Shift Compensation for On-Set Virtual Production}},
  author = {Beck, Christopher and Schattkowsky, Tim and Albertz, Stefan},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20261011}
}
@inproceedings{10.2312:egp.20261012,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{Still2Scene: Hybrid Gaussian Environments for Virtual Production}},
  author = {Sun, Xiaohan and O'Sullivan, Carol},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20261012}
}
@inproceedings{10.2312:egp.20261013,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{Semi-Automatic View-Based Segmentation of Gaussian Splat Scenes}},
  author = {Bisgaard, Mathias and Møller, Frederik and Nielsen, Jonas Moody and Mørch, Katrine and Baran, Samuel and Gaarsdal, Jesper and Nikolov, Ivan and Madsen, Claus},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20261013}
}
@inproceedings{10.2312:egp.20261014,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{Decoupled Reprojection Consistency for Diagnosing 3D Gaussian Splatting Failures}},
  author = {Park, Jin-Hyeong},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20261014}
}
@inproceedings{10.2312:egp.20261015,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{Opacity-Based Occlusion Culling for 3D Gaussian Splatting}},
  author = {Giannone, Matteo and Ibrahim, Mohamed and Liu, Yang},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20261015}
}
@inproceedings{10.2312:egp.20261016,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{Smaller and Faster 3DGS via Post-Training Dictionary Learning}},
  author = {Gong, Jiarong and Unger, Jonas and Miandji, Ehsan},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20261016}
}
@inproceedings{10.2312:egp.20262000,
  booktitle = {Eurographics 2026 - Posters},
  editor = {Gerrits, Tim and Teschner, Matthias},
  title = {{EUROGRAPHICS 2026: Posters Frontmatter}},
  author = {Gerrits, Tim and Teschner, Matthias},
  year = {2026},
  publisher = {The Eurographics Association},
  ISSN = {1017-4656},
  ISBN = {978-3-03868-300-1},
  DOI = {10.2312/egp.20262000}
}
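For readers who want to process the exported entries above programmatically, here is a minimal sketch in plain Python. The helper `extract_entries` is an illustrative regex pass, not a general BibTeX parser; it assumes simple `@inproceedings` records with one `DOI` field each, formatted as in this listing.

```python
import re

def extract_entries(bibtex: str):
    """Return (citation key, DOI) pairs from simple @inproceedings records.

    A lightweight regex pass that assumes one DOI field per entry and a
    closing brace on its own line, as in the listing above.
    """
    entries = []
    # Citation key: text between '@inproceedings{' and the first comma;
    # body: everything up to the entry's closing brace.
    for match in re.finditer(r"@inproceedings\{\s*([^,\s]+)\s*,(.*?)\n\}",
                             bibtex, re.S):
        key, body = match.group(1), match.group(2)
        doi = re.search(r"DOI\s*=\s*\{\s*([^}]+?)\s*\}", body)
        entries.append((key, doi.group(1) if doi else None))
    return entries

sample = """@inproceedings{10.2312:egp.20261000,
  booktitle = {Eurographics 2026 - Posters},
  DOI = {10.2312/egp.20261000}
}"""
print(extract_entries(sample))
# [('10.2312:egp.20261000', '10.2312/egp.20261000')]
```

For anything beyond key/DOI extraction (nested braces, string macros, cross-references), a dedicated BibTeX parsing library is the safer choice.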

Recent Submissions

  • Item
    Video Rater: A Framework for Subjective Evaluation of Rendering Artifacts
    (The Eurographics Association, 2026) Cyganik, Karol; Brzezinski, Dariusz; Gerrits, Tim; Teschner, Matthias
    Temporal reconstruction techniques used in real-time rendering, including temporal anti-aliasing and neural upscaling methods, improve visual quality but introduce characteristic spatial and temporal artifacts. Existing image and video quality metrics are largely optimized for compression artifacts and fail to capture these degradations. To enable the development of data-driven metrics for rendering artifacts, we present a video assessment framework aligned with ITU-R BT.500 recommendations. The proposed framework is accompanied by a customizable open-source Video Rater web application (https://github.com/kordc/Video-rater) designed to gather subjective ratings. We demonstrate the framework’s and tool’s usability through a use-case survey of over 200 videos with various rendering artifacts (https://video-rater.cs.put.poznan.pl/).
  • Item
    Kollani: A Distributed Tool for Real-Time Collaborative Reviews of 3D Assets
    (The Eurographics Association, 2026) Andreussi, Francesco; Hickstein, Claudio; Minsel, Martin; Gerrits, Tim; Teschner, Matthias
    We present a novel web-based platform for reviewing 3D assets in a collaborative, interactive, and unified way. Currently, such reviews must be carried out across several different platforms; our application instead integrates multiple tools (an interactive 3D environment, a video-conferencing feed, and a chat) in one coherent interface that can also be connected to popular DCCs, such as Maya. The result is a quick, accessible, and expressive way to discuss and review 3D models of any nature, regardless of the user's technical background or familiarity with modelling software.
  • Item
    Neural Approximation of Generalized Voronoi Diagrams
    (The Eurographics Association, 2026) Rigas, Panagiotis; Ioannakis, George; Emiris, Ioannis; Gerrits, Tim; Teschner, Matthias
    We introduce VoroFields, a hierarchical neural-field framework for approximating generalized Voronoi diagrams of finite geometric site sets in low-dimensional domains under arbitrary evaluable point-to-site distances. Instead of constructing the diagram combinatorially, VoroFields learns a continuous, differentiable surrogate whose maximizer structure induces the partition implicitly. The Voronoi cells correspond to maximizer regions of the field, with boundaries defined by equal responses between competing sites. A hierarchical decomposition reduces the combinatorial complexity by refining only near envelope transition strata. Experiments across site families and metrics demonstrate accurate recovery of cells and boundary geometry without shape-specific constructions.
  • Item
    Dynamic Region Filling for Robotic Artistic Painting using Visual Feedback
    (The Eurographics Association, 2026) Stroh, Michael; Berio, Daniel; Fol Leymarie, Frederic; Deussen, Oliver; Gerrits, Tim; Teschner, Matthias
    We present an iterative region-based stroke-filling framework for robotic painting that combines vectorized image abstraction with closed-loop physical feedback. Rather than executing a fixed stroke plan, the system incrementally generates adaptive 3D brush trajectories guided by geometric structure, coverage estimation, and physical canvas feedback. Each region is progressively filled using dynamically grown strokes whose direction, curvature, and width are optimized using distance transforms, structure tensor analysis, and local coverage maps. After each execution cycle, camera feedback is used to estimate real paint deposition and refine subsequent stroke generation. This closed-loop process continues until region-level coverage convergence is reached, enabling robust handling of physical uncertainties in paint transfer and brush dynamics. The result is a dynamic, region-based robotic painting system that maintains high physical reliability.
  • Item
    Hybrid Contrast-Aware Fog Detection for Automotive Vision Systems
    (The Eurographics Association, 2026) Procházková, Jana; Mikuláček, Pavel; Štarha, Pavel; Gerrits, Tim; Teschner, Matthias
    Modern vehicles are equipped with a wide range of Advanced Driver Assistance Systems (ADAS) that rely heavily on camera-based perception. Reliable visibility estimation – particularly under fog conditions – remains a significant challenge. Accurate fog detection can enable proactive system responses, such as automatic activation of fog lights, and enhance operational safety. We present a contrast-aware anomaly detection framework for image-based fog detection. Our algorithm combines multi-scale Difference of Gaussians responses and Gaussian-weighted local Root Mean Squared contrast with a convolutional autoencoder. The model is trained exclusively on clear-weather imagery to learn the nominal scene distribution, and visibility degradation is detected as a reconstruction deviation from this learned representation. Evaluation on a separate test set containing clear and fog conditions demonstrates an AUC of 0.91, achieved without using fog samples during training. The framework provides a practical basis for camera-based visibility monitoring in automotive environments.
  • Item
    SemanticWeaponry: A Modular Approach to Text-to-3D Model Generation
    (The Eurographics Association, 2026) Lower, Thomas; Anderson, Eike Falk; Gerrits, Tim; Teschner, Matthias
    We present a modular approach to AI-assisted Text-to-3D content generation that takes a semantic description of a 3D model, leveraging the semantic capabilities of Large Language Models to create a set of parameters which are then fed into an implicit surface function. The resulting geometry can be remeshed for use in 3D Digital Content Creation applications.
  • Item
    Automating Makeup Appearance Acquisition via Inverse Rendering for Virtual Try-On
    (The Eurographics Association, 2026) Li, Tao; Tran, Quoc Nam Loïc; Bokaris, Panagiotis-Alexandros; Gerrits, Tim; Teschner, Matthias
    Virtual Try-On (VTO) systems are essential for modern beauty e-commerce, yet creating physically accurate digital twins of makeup products remains a labor-intensive process requiring manual parameter tuning by 3D artists. We propose an automated pipeline to estimate rendering parameters for lipstick materials using inverse rendering. By leveraging controlled in-vitro imagery of lipsticks applied to contrast cards, we optimize the parameters of a principled BSDF within Mitsuba 3 [JSR∗22]. We introduce a multi-stage optimization strategy separating color, texture, and reflection, demonstrating high fidelity across five distinct commercial lipstick ranges. The pipeline can be generalized to other makeup materials.
  • Item
    MBRCNet: Multi-view Breast Reconstruction and Classification Network
    (The Eurographics Association, 2026) Pang, Yan; Quiñones, Rubi; Gerrits, Tim; Teschner, Matthias
    High-fidelity 3D reconstruction of the human breast from multi-view RGB images remains challenging, particularly for low-texture anatomy under sparse-view constraints. Standard imaging methods such as Computed Tomography or Magnetic Resonance Imaging provide dense volumetric data but impose monetary costs and radiation risks that limit routine use. Reconstructing 3D geometry from a limited number of 2D views is challenging, as low-texture, non-rigid surfaces with few projections frequently lead to geometric collapse or loss of instance-specific detail. Few prior methods address breast reconstruction under sparse-view constraints while also supporting downstream morphological analysis. To overcome these limitations, we propose a flexible framework, MBRCNet, which combines multi-view feature fusion with dual 2D/3D supervision tailored to low-texture, non-rigid anatomy and supports downstream morphology classification from reconstructed shape representations. Experiments show that MBRCNet improves reconstruction fidelity over relevant baselines and that reconstructed 3D shapes provide promising features for exploratory morphological grouping for clinically meaningful classifications.
  • Item
    Deep Illumination–Guided Light Probe Placement
    (The Eurographics Association, 2026) Tarasidis, Andreas; Vasilakis, Andreas-Alexandros; Fudos, Ioannis; Gerrits, Tim; Teschner, Matthias
    This work proposes an automated learning-based strategy for computing light probe layouts efficiently under varied illumination conditions. A neural network model estimates the relative contribution of candidate probes, enabling the rapid construction of a compact configuration that maintains the scene’s indirect lighting distribution. Evaluations on complex environments indicate that the method achieves substantial speedups over conventional placement methods without compromising illumination fidelity.
  • Item
    Compressing Double-Phase Holograms using 2D Gaussians
    (The Eurographics Association, 2026) Fan, Xiaoyue; Zhan, Yicheng; Mazumdar, Amrita; Akşit, Kaan; Gerrits, Tim; Teschner, Matthias
    Effective compression of double-phase holograms remains an unresolved challenge due to their high-frequency nature, impeding the practicality of holographic displays. To address this challenge, we propose a hologram compression method by modifying the GaussianImage. Our method decomposes phase-only holograms into two components based on their intrinsic checkerboard pattern, separately optimizing each with a reduced set of 2D Gaussians. Our best case reduces the primitive count to only 3% of the baseline, achieving a compression ratio of 26% while preserving Mean PSNR = 43.39 dB in the reconstructed scenes.
  • Item
    Real-Time Angular Color Shift Compensation for On-Set Virtual Production
    (The Eurographics Association, 2026) Beck, Christopher; Schattkowsky, Tim; Albertz, Stefan; Gerrits, Tim; Teschner, Matthias
    On-set virtual production (OSVP) uses LED volumes to display real-time rendered backgrounds driven by camera pose and viewing direction. The camera-visible region, known as the inner frustum, is rendered at higher fidelity but is particularly affected by angular color shift at oblique viewing angles. Current calibration frameworks focus only on static angle-independent compensation of errors in color rendition. In our approach, we use a robogoniometric setup to measure the far-field colorimetric behavior of LED panels and derive a lightweight, angle-dependent color profile for the LED wall that can be directly used for real-time angular color shift correction.
  • Item
    Still2Scene: Hybrid Gaussian Environments for Virtual Production
    (The Eurographics Association, 2026) Sun, Xiaohan; O'Sullivan, Carol; Gerrits, Tim; Teschner, Matthias
    Virtual production often requires rapidly generated background environments that support real-time rendering and limited camera motion. While volumetric 3D Gaussian splatting provides high visual fidelity, it is computationally expensive, whereas planar billboard representations are efficient but lack geometric depth. We present a hybrid Gaussian scene representation that converts a single image into a lightweight navigable environment for virtual production. The proposed approach combines volumetric foreground Gaussians with planar Gaussian primitives for distant regions, while additional billboard assets can be synthesized using diffusion-based image prompting. The resulting hybrid scene can be deployed directly in Unreal® Engine for real-time rendering. This workflow enables fast environment generation from images and provides a practical middle-ground between static backplates and fully authored 3D environments in virtual production pipelines.
  • Item
    Semi-Automatic View-Based Segmentation of Gaussian Splat Scenes
    (The Eurographics Association, 2026) Bisgaard, Mathias; Møller, Frederik; Nielsen, Jonas Moody; Mørch, Katrine; Baran, Samuel; Gaarsdal, Jesper; Nikolov, Ivan; Madsen, Claus; Gerrits, Tim; Teschner, Matthias
    Gaussian Splatting (GS) has become a widely utilized method for the visualization of highly detailed 3D scenes, capturing small details, surface material information, light interactions, and complex surface shapes. However, the GS generation process leaves a large amount of noise around the captured objects and surfaces, making the initial captures unusable without extensive post-processing, often performed manually. Furthermore, selecting and isolating parts of GS reconstructions can be challenging for asset creation. In this paper, we present our initial work on a human-in-the-loop view-based GS segmentation pipeline. We test our system on additional GS scenes and demonstrate that it consistently reduces noisy background splats and can be used to create GS assets. Anonymized code for the prototype: https://github.com/Bisgaardo/Gaussian-Splatting-Segmentation-Project
  • Item
    Decoupled Reprojection Consistency for Diagnosing 3D Gaussian Splatting Failures
    (The Eurographics Association, 2026) Park, Jin-Hyeong; Gerrits, Tim; Teschner, Matthias
    This paper introduces Decoupled Reprojection Consistency (DRC), a training-free diagnostic for 3D Gaussian Splatting (3DGS) that decomposes an ambiguous reprojection error into three interpretable maps: a geometry/visibility mismatch E_depth, a base photometric inconsistency E_base computed under SH truncation (L = 0), and a view-dependent residual E_vd. By comparing cross-view reprojection errors from zeroth-order (base) and trained-degree (full) SH renders of the same model, DRC separates geometry/visibility-driven inconsistency from appearance-driven instability. Paired with a bivariate fingerprint (quadrant occupancy) and coverage reporting, DRC turns reprojection consistency into actionable per-pixel triage. The method is demonstrated on synthetic and real scenes, separating glossy regions from geometry failures in our examples.
  • Item
    Opacity-Based Occlusion Culling for 3D Gaussian Splatting
    (The Eurographics Association, 2026) Giannone, Matteo; Ibrahim, Mohamed; Liu, Yang; Gerrits, Tim; Teschner, Matthias
    We present an occlusion culling pipeline for 3D Gaussian Splatting (3DGS) that reduces rendering cost while preserving visual fidelity. Our two-stage framework combines a coarse stage—which uses an opacity volume and hierarchical occlusion maps to cull Gaussians invisible from the current viewpoint—with a fine stage that partitions Gaussians into depth-ordered batches and discards those projecting to fully opaque pixels. Early culling prior to sorting reduces the workload for downstream sorting, projection, and blending, cutting per-frame computation and memory bandwidth. The pipeline scales to millions of Gaussians, making it practical for real-time 3DGS rendering.
  • Item
    Smaller and Faster 3DGS via Post-Training Dictionary Learning
    (The Eurographics Association, 2026) Gong, Jiarong; Unger, Jonas; Miandji, Ehsan; Gerrits, Tim; Teschner, Matthias
    3D Gaussian Splatting (3DGS) suffers from large memory footprints. Existing compression techniques often lead to architectures with several additional trainable parameters and noticeable drops in rendering performance. We introduce the first dictionary-learning-based compression framework for 3DGS. Our compression framework is straightforward to implement, yet provides significant compression capabilities, preserves image quality, and improves real-time rendering performance. Across 13 benchmark scenes, our approach achieves an average compression ratio of 3.95×, 3.10×, and 4.55× when applied to 3DGS, 3DGS-MCMC, and PixelGS, respectively. This yields consistent rendering speedups of 23.3%, 24.3%, and 25.3%, while maintaining image quality.
  • Item
    EUROGRAPHICS 2026: Posters Frontmatter
    (The Eurographics Association, 2026) Gerrits, Tim; Teschner, Matthias