Rendering 2025 - Symposium Track

Copenhagen, Denmark | 25 – 27 June 2025
(Rendering 2025 CGF-track papers are published separately in Computer Graphics Forum.)
Real-Time Rendering
Selective Caching in Procedural Texture Graphs for Path Tracing
Vincent Schüßler, Johannes Hanika, Basile Sauvage, Jean-Michel Dischler, and Carsten Dachsbacher
Spatio-Temporal Dithering for Order-Independent Transparency on Ray Tracing Hardware
Felix Brüll, René Kern, and Thorsten Grosch
Sampling and Guiding
Neural Path Guiding with Distribution Factorization
Pedro Figueiredo, Qihao He, and Nima Khademi Kalantari
Convergence Estimation of Markov-Chain Monte Carlo Rendering
Rui Yu, Guangzhong Sun, Shuang Zhao, and Yue Dong
Less can be more: A Footprint-driven Heuristic to skip Wasted Connections and Merges in Bidirectional Rendering
Ömercan Yazici, Pascal Grittmann, and Philipp Slusallek
Neural Resampling with Optimized Candidate Allocation
Alexander Rath, Marco Manzi, Sebastian Weiss, Tiziano Portenier, Farnood Salehi, Saeed Hadadan, and Marios Papas
Light and Brightness
A Divisive Normalization Brightness Model for Tone Mapping
Julian Ding and Peter Shirley
Temporal Brightness Management for Immersive Content
Luca Surace, Jorge Condor, and Piotr Didyk
Adaptive Multiple Control Variates for Many-Light Rendering
Xiaofeng Xu and Lu Wang
From Optical Measurement to Visual Comfort Analysis: a Complete Simulation Workflow with Ocean™'s Glare Map Post-processing
Oleksandra Bandeliuk, Grégoire Besse, Thomas Pierrard, and Estelle Berthier
Appearance Modelling
An evaluation of SVBRDF Prediction from Generative Image Models for Appearance Modeling of 3D Scenes
Alban Gauthier, Valentin Deschaintre, Alexandre Lanvin, Fredo Durand, Adrien Bousseau, and George Drettakis
A Controllable Appearance Representation for Flexible Transfer and Editing
Santiago Jimenez-Navarro, Julia Guerrero-Viu, and Belen Masia
Procedural Bump-based Defect Synthesis for Industrial Inspection
Runzhou Mao, Christoph Garth, and Petra Gospodnetic
Gaussians
Joint Gaussian Deformation in Triangle-Deformed Space for High-Fidelity Head Avatars
Jiawei Lu, Kunxin Guang, Conghui Hao, Kai Sun, Jian Yang, Jin Xie, and Beibei Wang
Content-Aware Texturing for Gaussian Splatting
Panagiotis Papantonakis, Georgios Kopanas, Frédo Durand, and George Drettakis
Stochastic Ray Tracing of Transparent 3D Gaussians
Xin Sun, Iliyan Georgiev, Yun (Raymond) Fei, and Milos Hasan
Uncertainty-Aware Gaussian Splatting with View-Dependent Regularization for High-Fidelity 3D Reconstruction
Shengjun Liu, Jiangxin Wu, Wenhui Wu, Lixiang Chu, and Xinru Liu
Stylization and Image Processing
Iterative Nonparametric Bayesian CP Decomposition for Hyperspectral Image Denoising
Wei Liu, Kaiwen Jiang, Jinzhi Lai, and Xuesong Zhang
BSDF Models and Scattering
Bidirectional Plateau-Border Scattering Distribution Function for Realistic and Efficient Foam Rendering
Ruizeng Li, Xinyang Liu, Runze Wang, Pengfei Shen, Ligang Liu, and Beibei Wang
Efficient Modeling and Rendering of Iridescence from Cholesteric Liquid Crystals
Gary Fourneau, Pascal Barla, and Romain Pacanowski
Differentiable Rendering
Sharpening Your Density Fields: Spiking Neuron Aided Fast Geometry Learning
Yi Gu, Zhaorui Wang, and Renjing Xu
Radiative Backpropagation with Non-Static Geometry
Markus Worchel, Ugo Finnendahl, and Marc Alexa
Differentiable Block Compression for Neural Texture
Tao Zhuang, Wentao Liu, and Ligang Liu

BibTeX (Rendering 2025 - Symposium Track)
@inproceedings{10.2312:sr.20252018,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Rendering 2025 Symposium Papers: Frontmatter}},
  author = {Wang, Beibei and Wilkie, Alexander},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20252018}
}
@inproceedings{10.2312:sr.20251176,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Selective Caching in Procedural Texture Graphs for Path Tracing}},
  author = {Schüßler, Vincent and Hanika, Johannes and Sauvage, Basile and Dischler, Jean-Michel and Dachsbacher, Carsten},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251176}
}
@inproceedings{10.2312:sr.20251177,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Spatio-Temporal Dithering for Order-Independent Transparency on Ray Tracing Hardware}},
  author = {Brüll, Felix and Kern, René and Grosch, Thorsten},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251177}
}
@inproceedings{10.2312:sr.20251178,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Neural Path Guiding with Distribution Factorization}},
  author = {Figueiredo, Pedro and He, Qihao and Kalantari, Nima Khademi},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251178}
}
@inproceedings{10.2312:sr.20251179,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Convergence Estimation of Markov-Chain Monte Carlo Rendering}},
  author = {Yu, Rui and Sun, Guangzhong and Zhao, Shuang and Dong, Yue},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251179}
}
@inproceedings{10.2312:sr.20251180,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Less can be more: A Footprint-driven Heuristic to skip Wasted Connections and Merges in Bidirectional Rendering}},
  author = {Yazici, Ömercan and Grittmann, Pascal and Slusallek, Philipp},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251180}
}
@inproceedings{10.2312:sr.20251181,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Neural Resampling with Optimized Candidate Allocation}},
  author = {Rath, Alexander and Manzi, Marco and Weiss, Sebastian and Portenier, Tiziano and Salehi, Farnood and Hadadan, Saeed and Papas, Marios},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251181}
}
@inproceedings{10.2312:sr.20251182,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{A Divisive Normalization Brightness Model for Tone Mapping}},
  author = {Ding, Julian and Shirley, Peter},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251182}
}
@inproceedings{10.2312:sr.20251183,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Temporal Brightness Management for Immersive Content}},
  author = {Surace, Luca and Condor, Jorge and Didyk, Piotr},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251183}
}
@inproceedings{10.2312:sr.20251184,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Adaptive Multiple Control Variates for Many-Light Rendering}},
  author = {Xu, Xiaofeng and Wang, Lu},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251184}
}
@inproceedings{10.2312:sr.20251185,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{From Optical Measurement to Visual Comfort Analysis: a Complete Simulation Workflow with Ocean™'s Glare Map Post-processing}},
  author = {Bandeliuk, Oleksandra and Besse, Grégoire and Pierrard, Thomas and Berthier, Estelle},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251185}
}
@inproceedings{10.2312:sr.20251186,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{An evaluation of SVBRDF Prediction from Generative Image Models for Appearance Modeling of 3D Scenes}},
  author = {Gauthier, Alban and Deschaintre, Valentin and Lanvin, Alexandre and Durand, Fredo and Bousseau, Adrien and Drettakis, George},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251186}
}
@inproceedings{10.2312:sr.20251187,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{A Controllable Appearance Representation for Flexible Transfer and Editing}},
  author = {Jimenez-Navarro, Santiago and Guerrero-Viu, Julia and Masia, Belen},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251187}
}
@inproceedings{10.2312:sr.20251188,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Procedural Bump-based Defect Synthesis for Industrial Inspection}},
  author = {Mao, Runzhou and Garth, Christoph and Gospodnetic, Petra},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251188}
}
@inproceedings{10.2312:sr.20251189,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Joint Gaussian Deformation in Triangle-Deformed Space for High-Fidelity Head Avatars}},
  author = {Lu, Jiawei and Guang, Kunxin and Hao, Conghui and Sun, Kai and Yang, Jian and Xie, Jin and Wang, Beibei},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251189}
}
@inproceedings{10.2312:sr.20251190,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Content-Aware Texturing for Gaussian Splatting}},
  author = {Papantonakis, Panagiotis and Kopanas, Georgios and Durand, Frédo and Drettakis, George},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251190}
}
@inproceedings{10.2312:sr.20251191,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Stochastic Ray Tracing of Transparent 3D Gaussians}},
  author = {Sun, Xin and Georgiev, Iliyan and Fei, Yun (Raymond) and Hasan, Milos},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251191}
}
@inproceedings{10.2312:sr.20251192,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Uncertainty-Aware Gaussian Splatting with View-Dependent Regularization for High-Fidelity 3D Reconstruction}},
  author = {Liu, Shengjun and Wu, Jiangxin and Wu, Wenhui and Chu, Lixiang and Liu, Xinru},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251192}
}
@inproceedings{10.2312:sr.20251193,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Iterative Nonparametric Bayesian CP Decomposition for Hyperspectral Image Denoising}},
  author = {Liu, Wei and Jiang, Kaiwen and Lai, Jinzhi and Zhang, Xuesong},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251193}
}
@inproceedings{10.2312:sr.20251195,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Bidirectional Plateau-Border Scattering Distribution Function for Realistic and Efficient Foam Rendering}},
  author = {Li, Ruizeng and Liu, Xinyang and Wang, Runze and Shen, Pengfei and Liu, Ligang and Wang, Beibei},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251195}
}
@inproceedings{10.2312:sr.20251196,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Efficient Modeling and Rendering of Iridescence from Cholesteric Liquid Crystals}},
  author = {Fourneau, Gary and Barla, Pascal and Pacanowski, Romain},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251196}
}
@inproceedings{10.2312:sr.20251197,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Sharpening Your Density Fields: Spiking Neuron Aided Fast Geometry Learning}},
  author = {Gu, Yi and Wang, Zhaorui and Xu, Renjing},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251197}
}
@inproceedings{10.2312:sr.20251198,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Radiative Backpropagation with Non-Static Geometry}},
  author = {Worchel, Markus and Finnendahl, Ugo and Alexa, Marc},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251198}
}
@inproceedings{10.2312:sr.20251199,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Wang, Beibei and Wilkie, Alexander},
  title = {{Differentiable Block Compression for Neural Texture}},
  author = {Zhuang, Tao and Liu, Wentao and Liu, Ligang},
  year = {2025},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-292-9},
  DOI = {10.2312/sr.20251199}
}

Recent Submissions

Now showing 1 - 24 of 24
  • Item
    Rendering 2025 Symposium Papers: Frontmatter
    (The Eurographics Association, 2025) Wang, Beibei; Wilkie, Alexander; Wang, Beibei; Wilkie, Alexander
  • Item
    Selective Caching in Procedural Texture Graphs for Path Tracing
    (The Eurographics Association, 2025) Schüßler, Vincent; Hanika, Johannes; Sauvage, Basile; Dischler, Jean-Michel; Dachsbacher, Carsten; Wang, Beibei; Wilkie, Alexander
    Procedural texturing is crucial for adding details in large-scale rendering. Typically, procedural textures are represented as computational graphs that artists can edit. However, as scene and graph complexity grow, evaluating these graphs becomes increasingly expensive for the rendering system. Performance is greatly affected by the evaluation strategy: Precomputing textures into high resolution maps is straightforward but can be inefficient, while shade-on-hit architectures and tile-based caches improve efficiency by evaluating only necessary data. However, the ideal choice of strategy depends on the application context. We present a new method to dynamically select which texture graph nodes to cache within a rendering system that supports filtered texture graph evaluation and tile-based caching. Our method allows us to construct an optimized evaluation strategy for each scene. Cache-friendly nodes are identified using data-driven predictions based on statistics of requested texture footprints, gathered during a profiling phase. We develop a statistical model that fits profiling data and predicts how caching specific nodes affects evaluation efficiency and storage demands. Our approach can be directly integrated into a rendering system or used to analyze renderer data, helping practitioners to optimize performance in their workflows.
  • Item
    Spatio-Temporal Dithering for Order-Independent Transparency on Ray Tracing Hardware
    (The Eurographics Association, 2025) Brüll, Felix; Kern, René; Grosch, Thorsten; Wang, Beibei; Wilkie, Alexander
    Efficient rendering of many transparent surfaces is a challenging problem in real-time ray tracing. We introduce an alternative approach to conventional order-independent transparency (OIT) techniques: our method interprets the alpha channel as coverage and uses state-of-the-art temporal anti-aliasing techniques to accumulate transparency over multiple frames. By efficiently utilizing ray tracing hardware and its early ray termination capabilities, our method reduces computational costs compared to conventional OIT methods. Furthermore, our approach shades only one fragment per pixel, significantly lowering the shading workload and improving frame rate stability. Despite relying on temporal accumulation, our technique performs well in dynamic scenes.
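    As a rough illustration of the coverage interpretation described above (not the authors' implementation), the following Python sketch stops a ray at each transparent surface with probability alpha and averages many frames, which converges to the conventional alpha-blended result:

      import random

      def trace_stochastic(surfaces, background):
          """surfaces: front-to-back list of (alpha, color); returns one frame's sample."""
          for alpha, color in surfaces:
              if random.random() < alpha:   # treat alpha as coverage: the ray stops here
                  return color              # only one fragment is shaded per pixel
          return background

      def accumulate(surfaces, background, frames=100000):
          """Temporal accumulation (a simple stand-in for the TAA step)."""
          return sum(trace_stochastic(surfaces, background) for _ in range(frames)) / frames

      # Two stacked transparent layers (alpha, grayscale color) over a background:
      print(accumulate([(0.5, 1.0), (0.5, 0.0)], background=0.25))   # converges to 0.5625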
  • Item
    Neural Path Guiding with Distribution Factorization
    (The Eurographics Association, 2025) Figueiredo, Pedro; He, Qihao; Kalantari, Nima Khademi; Wang, Beibei; Wilkie, Alexander
    In this paper, we present a neural path guiding method to aid with Monte Carlo (MC) integration in rendering. Existing neural methods utilize distribution representations that are either fast or expressive, but not both. We propose a simple, but effective, representation that is sufficiently expressive and reasonably fast. Specifically, we break down the 2D distribution over the directional domain into two 1D probability distribution functions (PDF). We propose to model each 1D PDF using a neural network that estimates the distribution at a set of discrete coordinates. The PDF at an arbitrary location can then be evaluated and sampled through interpolation. To train the network, we maximize the similarity of the learned and target distributions. To reduce the variance of the gradient during optimizations and estimate the normalization factor, we propose to cache the incoming radiance using an additional network. Through extensive experiments, we demonstrate that our approach is better than the existing methods, particularly in challenging scenes with complex light transport.
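    To make the factorization concrete, here is a hypothetical Python sketch: the 2D directional PDF is split into a discretized marginal p(u) and conditional p(v | u) (piecewise-constant here, where the paper interpolates), with placeholder functions standing in for the networks:

      import numpy as np

      def normalize(x):
          x = np.maximum(x, 1e-8)
          return x / x.sum()

      def sample_factorized(marginal_net, conditional_net, bins=32, rng=np.random.default_rng()):
          """Sample (u, v) in [0,1]^2 from the factorized, discretized distribution."""
          pu = normalize(marginal_net(np.linspace(0, 1, bins)))        # discrete p(u)
          iu = rng.choice(bins, p=pu)                                  # pick a u-bin
          u = (iu + rng.random()) / bins                               # jitter inside the bin
          pv = normalize(conditional_net(u, np.linspace(0, 1, bins)))  # discrete p(v | u)
          iv = rng.choice(bins, p=pv)
          v = (iv + rng.random()) / bins
          pdf = (pu[iu] * bins) * (pv[iv] * bins)                      # joint density at (u, v)
          return u, v, pdf

      # Placeholder "networks": peaked analytic functions for illustration only.
      u, v, pdf = sample_factorized(lambda us: np.exp(-20 * (us - 0.3) ** 2),
                                    lambda u, vs: np.exp(-20 * (vs - u) ** 2))
      print(u, v, pdf)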
  • Item
    Convergence Estimation of Markov-Chain Monte Carlo Rendering
    (The Eurographics Association, 2025) Yu, Rui; Sun, Guangzhong; Zhao, Shuang; Dong, Yue; Wang, Beibei; Wilkie, Alexander
    We present a theoretical framework for estimating the convergence of Markov-Chain Monte Carlo (MCMC) rendering algorithms. Our theory considers both the variance and the correlation between samples, allowing for quantitative analyses of the convergence properties of MCMC estimators. With our theoretical framework, we devise a Monte Carlo (MC) algorithm capable of accurately estimating the expected MSE of an MCMC rendering algorithm. By adopting an efficient rejection sampling scheme, our MC-based MSE estimator yields a lower standard deviation compared to directly measuring the MSE by running the MCMC rendering algorithm multiple times. Moreover, we demonstrate that modifying the target distribution of the Markov chain by roughening the specular BRDF might lead to faster convergence in some scenarios. This finding suggests that our estimator can serve as a potential guide for selecting the target distribution.
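    The role of sample correlation can be illustrated with the textbook inflation factor for the variance of a correlated-sample mean, Var ≈ (sigma^2 / N) * (1 + 2 * sum_k rho_k); the sketch below (standard MCMC analysis, not the paper's estimator) measures it on a synthetic AR(1) chain:

      import numpy as np

      def correlated_mean_variance(samples, max_lag=50):
          """Variance of the sample mean, accounting for lag-k autocorrelation."""
          x = samples - samples.mean()
          var = x.var()
          rho = [float((x[:-k] * x[k:]).mean() / var) for k in range(1, max_lag + 1)]
          return var / len(samples) * (1.0 + 2.0 * sum(rho))

      rng = np.random.default_rng(0)
      chain = np.empty(20000)
      chain[0] = 0.0
      for i in range(1, len(chain)):                 # AR(1) chain: strongly correlated samples
          chain[i] = 0.9 * chain[i - 1] + rng.normal()
      print(correlated_mean_variance(chain), chain.var() / len(chain))  # roughly 19x the i.i.d. value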
  • Item
    Less can be more: A Footprint-driven Heuristic to skip Wasted Connections and Merges in Bidirectional Rendering
    (The Eurographics Association, 2025) Yazici, Ömercan; Grittmann, Pascal; Slusallek, Philipp; Wang, Beibei; Wilkie, Alexander
    Bidirectional rendering algorithms can robustly render a wide range of scenes and light transport effects. Their robustness stems from the fact that they combine a huge number of sampling techniques: Paths traced from the camera are combined with paths traced from the lights by connecting or merging their vertices in all possible combinations. The flip side of this robustness is that efficiency suffers because most of these connections and merges are not useful - their samples will have a weight close to zero. Skipping these wasted computations is hence desirable. Prior work has attempted this via manual parameter tuning, by classifying materials as ''specular'', ''glossy'', or ''diffuse'', or via costly data-driven adaptation. We, instead, propose a simple footprint-driven heuristic to selectively enable only the most impactful bidirectional techniques. Our heuristic is based only on readily available PDF values, does not require manual tuning, supports arbitrarily complex material systems, and does not require precomputation.
  • Item
    Neural Resampling with Optimized Candidate Allocation
    (The Eurographics Association, 2025) Rath, Alexander; Manzi, Marco; Weiss, Sebastian; Portenier, Tiziano; Salehi, Farnood; Hadadan, Saeed; Papas, Marios; Wang, Beibei; Wilkie, Alexander
    We propose a novel framework that accelerates Monte Carlo rendering with the help of machine learning. Unlike previous works that learn parametric distributions that can be sampled directly, our method learns the 5-dimensional unnormalized incident radiance field and samples its product with the material response (BRDF) through Resampled Importance Sampling. This allows for more flexible network architectures that can be used to improve upon existing path guiding approaches and can also be reused for other tasks such as radiance caching. To reduce the cost of resampling, we derive optimized spatially-varying candidate counts to maximize the efficiency of the render process. We designed our method to accelerate CPU production renders by benefiting from otherwise idle GPU resources without need of intrusive changes to the renderer. We compare our approach against state-of-the-art path guiding methods, both neural and non-neural, and demonstrate significant variance reduction at equal render times on production scenes.
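    Resampled importance sampling itself is standard and easy to sketch; the version below uses made-up stand-ins for the learned radiance field and the BRDF, and reduces the paper's optimized, spatially varying candidate counts to a fixed n_candidates:

      import random

      def ris_sample(sample_source, source_pdf, target, n_candidates=8):
          """Pick one of n_candidates proportionally to target/source weights (RIS)."""
          candidates = [sample_source() for _ in range(n_candidates)]
          weights = [target(x) / source_pdf(x) for x in candidates]
          total = sum(weights)
          if total == 0.0:
              return None, 0.0
          r, acc = random.random() * total, 0.0
          for x, w in zip(candidates, weights):
              acc += w
              if r <= acc:
                  return x, total / (n_candidates * target(x))   # unbiased contribution weight
          return candidates[-1], total / (n_candidates * target(candidates[-1]))

      # Example: uniform candidates on [0, 1]; target = (incident radiance) * (BRDF), both made up.
      x, w = ris_sample(sample_source=random.random,
                        source_pdf=lambda x: 1.0,
                        target=lambda x: (1.0 + x) * max(0.0, 1.0 - x))
      print(x, w)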
  • Item
    A Divisive Normalization Brightness Model for Tone Mapping
    (The Eurographics Association, 2025) Ding, Julian; Shirley, Peter; Wang, Beibei; Wilkie, Alexander
    Tone mapping operators (TMOs) are essential in digital graphics, enabling the conversion of high-dynamic-range (HDR) scenes to the limited dynamic range reproducible by display devices, while simultaneously preserving the perceived qualities of the scene. An important aspect of perceived scene fidelity is brightness: the perceived luminance at every position in the scene. We introduce DINOS, a neurally inspired brightness model combining the multi-scale architecture of several historical models with a divisive normalization structure suggested by experimental results from recent studies on neural responses in the human visual pathway. We then evaluate the brightness perception predicted by DINOS against several well-known brightness illusions, as well as human preferences from an existing study which quantitatively ranks 14 popular TMOs. Finally, we propose BRONTO: a brightness-optimized TMO that directly leverages DINOS to perform locally varying exposure. We demonstrate BRONTO's efficacy on a variety of HDR scenes and compare its performance against several other contemporary TMOs.
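    A toy divisive-normalization response (not DINOS itself) can be written in a few lines: each pixel is divided by a pooled local surround, which compresses dynamic range while preserving local contrast; the surround size and semi-saturation constant here are illustrative:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def divisive_normalization(luminance, surround_sigma=8.0, semi_saturation=0.05):
          """Brightness-like response: center / (semi-saturation + pooled surround)."""
          surround = gaussian_filter(luminance, surround_sigma)   # local pooled luminance
          return luminance / (semi_saturation + surround)

      hdr = np.exp(np.random.default_rng(0).normal(0.0, 2.0, (64, 64)))   # synthetic HDR patch
      response = divisive_normalization(hdr)
      print(float(hdr.max() / hdr.min()), float(response.max() / response.min()))  # range is compressed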
  • Item
    Temporal Brightness Management for Immersive Content
    (The Eurographics Association, 2025) Surace, Luca; Condor, Jorge; Didyk, Piotr; Wang, Beibei; Wilkie, Alexander
    Modern virtual reality headsets demand significant computational resources to render high-resolution content in real-time. Therefore, prioritizing power efficiency becomes crucial, particularly for portable versions reliant on batteries. A significant portion of the energy consumed by these systems is attributed to their displays. Dimming the screen can save a considerable amount of energy; however, it may also result in a loss of visible details and contrast in the displayed content. While contrast may be partially restored by applying post-processing contrast enhancement steps, our work is orthogonal to these approaches, and focuses on optimal temporal modulation of screen brightness. We propose a technique that modulates brightness over time while minimizing the potential loss of visible details and avoiding noticeable temporal instability. Given a predetermined power budget and a video sequence, we achieve this by measuring contrast loss through band decomposition of the luminance image and optimizing the brightness level of each frame offline to ensure uniform temporal contrast loss. We evaluate our method through a series of subjective experiments and an ablation study, on a variety of content. We showcase its power-saving capabilities in practice using a built-in hardware proxy. Finally, we present an online version of our approach which further emphasizes the potential for low level vision models to be leveraged in power saving settings to preserve content quality.
  • Item
    Adaptive Multiple Control Variates for Many-Light Rendering
    (The Eurographics Association, 2025) Xu, Xiaofeng; Wang, Lu; Wang, Beibei; Wilkie, Alexander
    Monte Carlo integration estimates the path integral in light transport by randomly sampling light paths and averaging their contributions. However, in scenes with many lights, the resulting estimates suffer from noise and slow convergence due to high-frequency discontinuities introduced by complex light visibility, scattering functions, and emissive properties. To mitigate these challenges, control variates have been employed to approximate the integrand and reduce variance. While previous approaches have shown promise in direct illumination applications, they struggle to efficiently handle the discontinuities inherent in many-light environments, especially when relying on a single control variate. In this work, we introduce an adaptive method that generates multiple control variates tailored to the spatial distribution and number of lights in the scene. Drawing inspiration from hierarchical light clustering methods like Lightcuts, our approach dynamically determines the number of control variates. We validate our method on the direct illumination problem in scenes with many lights, demonstrating that our adaptive multiple control variates not only outperform a single-control-variate strategy but also achieve a modest improvement over current state-of-the-art many-light sampling techniques.
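    The underlying control-variate idea: if g approximates the integrand f and its integral G is known analytically, integrating only the residual f - g has much lower variance. A minimal single-variate sketch (the paper generalizes this to multiple, adaptively clustered variates):

      import math
      import random

      def mc_plain(f, n):
          """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
          return sum(f(random.random()) for _ in range(n)) / n

      def mc_control_variate(f, g, G, n):
          """Estimate the integral of f as G + a Monte Carlo estimate of (f - g)."""
          return G + sum(f(x) - g(x) for x in (random.random() for _ in range(n))) / n

      f = lambda x: math.exp(x)   # integrand (stand-in for a light's contribution)
      g = lambda x: 1.0 + x       # cheap control variate approximating f
      G = 1.5                     # its integral over [0, 1], known in closed form

      print(mc_plain(f, 1000), mc_control_variate(f, g, G, 1000))   # both near e - 1 = 1.718...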
  • Item
    From Optical Measurement to Visual Comfort Analysis: a Complete Simulation Workflow with Ocean™'s Glare Map Post-processing
    (The Eurographics Association, 2025) Bandeliuk, Oleksandra; Besse, Grégoire; Pierrard, Thomas; Berthier, Estelle; Wang, Beibei; Wilkie, Alexander
    Lighting critically influences public safety and visual comfort across environments. Discomfort glare, in particular, poses a major challenge. We here introduce Ocean™'s glare map, a fast, high-fidelity glare evaluation tool that computes key indices (UGR, DGP, GR) through post-processing of spectral global illumination simulations. Beyond whole-scene assessments, our glare map tool uniquely offers per-source glare ratings, enabling precise design optimization. Through three practical use cases, we demonstrate the effectiveness of our tool for operational design and show how changes in illumination and material properties directly affect glare, supporting safer and more efficient lighting designs.
  • Item
    An evaluation of SVBRDF Prediction from Generative Image Models for Appearance Modeling of 3D Scenes
    (The Eurographics Association, 2025) Gauthier, Alban; Deschaintre, Valentin; Lanvin, Alexandre; Durand, Fredo; Bousseau, Adrien; Drettakis, George; Wang, Beibei; Wilkie, Alexander
    Digital content creation is experiencing a profound change with the advent of deep generative models. For texturing, conditional image generators now allow the synthesis of realistic RGB images of a 3D scene that align with the geometry of that scene. For appearance modeling, SVBRDF prediction networks recover material parameters from RGB images. Combining these technologies allows us to quickly generate SVBRDF maps for multiple views of a 3D scene, which can be merged to form an SVBRDF texture atlas of that scene. In this paper, we analyze the challenges and opportunities for SVBRDF prediction in the context of such a fast appearance modeling pipeline. On the one hand, single-view SVBRDF predictions might suffer from multi-view incoherence and yield inconsistent texture atlases. On the other hand, generated RGB images, and the different modalities on which they are conditioned, can provide additional information for SVBRDF estimation compared to photographs. We compare neural architectures and conditions to identify designs that achieve high accuracy and coherence. We find that, surprisingly, a standard UNet is competitive with more complex designs.
  • Item
    A Controllable Appearance Representation for Flexible Transfer and Editing
    (The Eurographics Association, 2025) Jimenez-Navarro, Santiago; Guerrero-Viu, Julia; Masia, Belen; Wang, Beibei; Wilkie, Alexander
    We present a method that computes an interpretable representation of material appearance within a highly compact, disentangled latent space. This representation is learned in a self-supervised fashion using a VAE-based model. We train our model with a carefully designed unlabeled dataset, avoiding possible biases induced by human-generated labels. Our model demonstrates strong disentanglement and interpretability by effectively encoding material appearance and illumination, despite the absence of explicit supervision. To showcase the capabilities of such a representation, we leverage it for two proof-of-concept applications: image-based appearance transfer and editing. Our representation is used to condition a diffusion pipeline that transfers the appearance of one or more images onto a target geometry, and allows the user to further edit the resulting appearance. This approach offers fine-grained control over the generated results: thanks to the well-structured compact latent space, users can intuitively manipulate attributes such as hue or glossiness in image space to achieve the desired final appearance.
  • Item
    Procedural Bump-based Defect Synthesis for Industrial Inspection
    (The Eurographics Association, 2025) Mao, Runzhou; Garth, Christoph; Gospodnetic, Petra; Wang, Beibei; Wilkie, Alexander
    Automated defect detection is critical for quality control, but collecting and annotating real-world defect images remains costly and time-consuming, motivating the use of synthetic data. Existing methods such as geometry-based modeling, normal maps, and image-based approaches often struggle to balance realism, efficiency, and scalability. We propose a procedural method for synthesizing small-scale surface defects using gradient-based bump mapping and triplanar projection. By perturbing surface normals at shading time, our approach enables parameterized control over diverse scratch and dent patterns, while avoiding mesh edits, UV mapping, or texture lookup. It also produces pixel-accurate defect masks for annotation. Experimental results show that our method achieves comparable visual quality to geometry-based modeling, with lower computational overhead and improved surface continuity over static normal maps. The method offers a lightweight and scalable solution for generating high-quality training data for industrial inspection tasks.
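    The bump-mapping mechanism behind this (shown here with an entirely made-up scratch profile) amounts to perturbing the shading normal by the gradient of a procedural height function at shading time, with no mesh edit or texture lookup:

      import numpy as np

      def scratch_height(p, width=0.02, depth=0.05):
          """Height of a straight groove along the y-axis at point p = (x, y)."""
          d = p[0]                                    # signed distance to the scratch line x = 0
          return -depth * np.exp(-(d / width) ** 2)   # Gaussian groove profile

      def perturbed_normal(p, base_normal=np.array([0.0, 0.0, 1.0]), eps=1e-4):
          """Finite-difference gradient of the height field perturbs the surface normal."""
          dhdx = (scratch_height(p + [eps, 0.0]) - scratch_height(p - [eps, 0.0])) / (2 * eps)
          dhdy = (scratch_height(p + [0.0, eps]) - scratch_height(p - [0.0, eps])) / (2 * eps)
          n = base_normal - np.array([dhdx, dhdy, 0.0])   # standard bump-map perturbation
          return n / np.linalg.norm(n)

      print(perturbed_normal(np.array([0.01, 0.0])))      # tilted normal on the scratch flank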
  • Item
    Joint Gaussian Deformation in Triangle-Deformed Space for High-Fidelity Head Avatars
    (The Eurographics Association, 2025) Lu, Jiawei; Guang, Kunxin; Hao, Conghui; Sun, Kai; Yang, Jian; Xie, Jin; Wang, Beibei; Wang, Beibei; Wilkie, Alexander
    Creating 3D human heads with mesoscale details and high-fidelity animation from monocular or sparse multi-view videos is challenging. While 3D Gaussian splatting (3DGS) has brought significant benefits to this task thanks to its powerful representation ability and rendering speed, existing works still face several issues, including inaccurate and blurry deformation and a lack of detailed appearance, caused by the difficulty of representing complex deformations and by unreasonable Gaussian placement. In this paper, we propose a joint Gaussian deformation method by decoupling the complex deformation into two simpler deformations, incorporating a learnable displacement map-guided Gaussian-triangle binding and a neural-based deformation refinement, improving the fidelity of animation and details of reconstructed head avatars. However, renderings of reconstructed head avatars at unseen views still show artifacts, due to overfitting on sparse input views. To address this issue, we leverage synthesized pseudo views rendered with fitted textured 3DMMs as priors to initialize Gaussians, which helps maintain a consistent and realistic appearance across various views. As a result, our method outperforms existing state-of-the-art approaches by about 4.3 dB PSNR in novel-view synthesis and by about 0.9 dB PSNR in self-reenactment on multi-view video datasets. Our method also preserves high-frequency details, exhibits more accurate deformations, and significantly reduces artifacts in unseen views.
  • Item
    Content-Aware Texturing for Gaussian Splatting
    (The Eurographics Association, 2025) Papantonakis, Panagiotis; Kopanas, Georgios; Durand, Frédo; Drettakis, George; Wang, Beibei; Wilkie, Alexander
    Gaussian Splatting has become the method of choice for 3D reconstruction and real-time rendering of captured real scenes. However, fine appearance details need to be represented as a large number of small Gaussian primitives, which can be wasteful when geometry and appearance exhibit different frequency characteristics. Inspired by the long tradition of texture mapping, we propose to use texture to represent detailed appearance where possible. Our main focus is to incorporate per-primitive texture maps that adapt to the scene in a principled manner during Gaussian Splatting optimization. We do this by proposing a new appearance representation for 2D Gaussian primitives with textures where the size of a texel is bounded by the image sampling frequency and adapted to the content of the input images. We achieve this by adaptively upscaling or downscaling the texture resolution during optimization. In addition, our approach enables control of the number of primitives during optimization based on texture resolution. We show that our approach performs favorably in image quality and total number of parameters used compared to alternative solutions for textured Gaussian primitives.
  • Item
    Stochastic Ray Tracing of Transparent 3D Gaussians
    (The Eurographics Association, 2025) Sun, Xin; Georgiev, Iliyan; Fei, Yun (Raymond); Hasan, Milos; Wang, Beibei; Wilkie, Alexander
    3D Gaussian splatting has been widely adopted as a 3D representation for novel-view synthesis, relighting, and 3D generation tasks. It delivers realistic and detailed results through a collection of explicit 3D Gaussian primitives, each carrying opacity and view-dependent color. However, efficient rendering of many transparent primitives remains a significant challenge. Existing approaches either rasterize the Gaussians with approximate per-view sorting or rely on high-end RTX GPUs. This paper proposes a stochastic ray-tracing method to render 3D clouds of transparent primitives. Instead of processing all ray-Gaussian intersections in sequential order, each ray traverses the acceleration structure only once, randomly accepting and shading a single intersection (or N intersections, using a simple extension). This approach minimizes shading time and avoids primitive sorting along the ray, thereby minimizing register usage and maximizing parallelism even on low-end GPUs. The cost of rays through the Gaussian asset is comparable to that of standard mesh-intersection rays. The shading is unbiased and has low variance, as our stochastic acceptance achieves importance sampling based on accumulated weight. The alignment with Monte Carlo philosophy simplifies implementation and integration into a conventional path-tracing framework.
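    The "accept a single intersection with probability proportional to its weight" idea can be illustrated with weighted reservoir sampling, which needs only one unsorted streaming pass; this is an illustrative stand-in, not the paper's exact acceptance scheme:

      import random

      def select_one(intersections):
          """intersections: iterable of (weight, color); returns an unbiased color estimate."""
          total, chosen = 0.0, 0.0
          for weight, color in intersections:              # single streaming pass, no sorting
              total += weight
              if total > 0.0 and random.random() < weight / total:   # keep with prob. weight / running total
                  chosen = color
          return chosen * total                            # E[estimate] = sum_i weight_i * color_i

      # Three transparent Gaussians hit by a ray; weights already fold in opacity falloff:
      hits = [(0.3, 1.0), (0.1, 0.5), (0.2, 0.2)]
      print(sum(select_one(hits) for _ in range(100000)) / 100000)   # approx. 0.39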
  • Item
    Uncertainty-Aware Gaussian Splatting with View-Dependent Regularization for High-Fidelity 3D Reconstruction
    (The Eurographics Association, 2025) Liu, Shengjun; Wu, Jiangxin; Wu, Wenhui; Chu, Lixiang; Liu, Xinru; Wang, Beibei; Wilkie, Alexander
    3D Gaussian Splatting (3DGS) has emerged as a groundbreaking paradigm for explicit scene representation, achieving photorealistic novel view synthesis with real-time rendering capabilities. However, reconstructing geometrically consistent and accurate surfaces under complex real-world scenarios remains a critical challenge. Current 3DGS frameworks primarily rely on photometric loss optimization, which often results in multi-view geometric inconsistencies and inadequate handling of texture-less regions due to two inherent limitations: 1) the absence of explicit geometric constraints during Gaussian parameter optimization, and 2) the lack of effective mechanisms to resolve multi-view geometric ambiguities. To address these challenges, we propose Uncertainty-Aware Gaussian Splatting (UA-GS), a novel framework that integrates geometric priors with view-dependent uncertainty modeling to explicitly capture and resolve multi-view inconsistencies. For efficient optimization of Gaussian attributes, our approach introduces a spherical harmonics-based uncertainty representation that dynamically models view-dependent geometric variations. Building on this framework, we further design uncertainty-aware optimization strategies. Extensive experiments on real-world and synthetic benchmarks demonstrate that our method significantly outperforms state-of-the-art 3DGS-based approaches in geometric accuracy while retaining competitive rendering quality. Code and data will be made available soon.
  • Item
    Iterative Nonparametric Bayesian CP Decomposition for Hyperspectral Image Denoising
    (The Eurographics Association, 2025) Liu, Wei; Jiang, Kaiwen; Lai, Jinzhi; Zhang, Xuesong; Wang, Beibei; Wilkie, Alexander
    Hyperspectral image (HSI) denoising relies on exploiting the multiway correlations hidden in the clean signals to separate them from the randomness of the measurement noise. This paper proposes a self-supervised model with a three-layer algorithmic hierarchy that iteratively searches for a tensor-decomposition-based representation of the underlying HSI. The outer layer takes advantage of the non-local similarity of HSI via a simple but effective k-means++ algorithm to explore the patch-level correlation and yields clusters of patches with similar tensor ranks. The middle and inner layers consist of a Bayesian nonparametric tensor decomposition framework. The middle one employs a multiplicative Gamma process prior for the low-rank tensor decomposition, and a Gaussian-Wishart prior for a more flexible exploration of the correlations among the latent factor matrices. The inner layer is responsible for the finer regression of the residual multiway correlations leaked from the upper two layers. Our scheme also provides a principled and automatic treatment of several practical HSI denoising factors, such as the noise level, the model complexity, and the regularization weights. Extensive experiments validate that our method outperforms state-of-the-art methods on a series of HSI denoising metrics.
  • Item
    Bidirectional Plateau-Border Scattering Distribution Function for Realistic and Efficient Foam Rendering
    (The Eurographics Association, 2025) Li, Ruizeng; Liu, Xinyang; Wang, Runze; Shen, Pengfei; Liu, Ligang; Wang, Beibei; Wang, Beibei; Wilkie, Alexander
    Liquid foams are a common phenomenon in our daily life. In computer graphics, rendering realistic foams remains challenging due to their complex geometry and light interactions within the foam. While the structure of the liquid foams has been well studied in the field of physics, it's rarely leveraged for rendering, even though it is essential for achieving realistic appearances. In physics, the intersection of two bubbles creates a liquid-carrying channel known as the Plateau border (PB). In this paper, we introduce the Plateau border into liquid foam rendering by explicitly modeling it at the geometric level. Although modeling of PBs enhances visual realism with path tracing, it suffers from extensive rendering costs due to multiple scattering effects within the medium contained in the PB. To tackle this, we propose a novel scattering function that models the aggregation of scattering within the medium surrounded by a Plateau border, termed the bidirectional Plateau-border scattering distribution function (BPSDF). Since no analytical formulation can be derived for the BPSDF, we propose a neural representation, together with importance sampling and probability distribution functions, to enable Monte Carlo-based rendering. By integrating our BPSDF into path tracing, our method achieves both realistic and efficient rendering of liquid foams, producing images with high fidelity.
  • Item
    Efficient Modeling and Rendering of Iridescence from Cholesteric Liquid Crystals
    (The Eurographics Association, 2025) Fourneau, Gary; Barla, Pascal; Pacanowski, Romain; Wang, Beibei; Wilkie, Alexander
    We introduce a novel approach to the efficient modeling and rendering of Cholesteric Liquid Crystals (CLCs), materials known for producing colorful effects due to their helical molecular structure. CLCs reflect circularly-polarized light within specific spectral bands, making their accurate simulation challenging for realistic rendering in Computer Graphics. Using the two-wave approximation from the Photonics literature, we develop a piecewise spectral reflectance model that improves the understanding of how light interacts with CLCs for arbitrary incident angles. Our reflectance model allows for more efficient spectral rendering and fast integration into RGB-based rendering engines. We show that our approach is able to reproduce the unique visual properties of both natural and man-made CLCs, while keeping the computation fast enough for interactive applications and avoiding potential spectral aliasing issues.
  • Item
    Sharpening Your Density Fields: Spiking Neuron Aided Fast Geometry Learning
    (The Eurographics Association, 2025) Gu, Yi; Wang, Zhaorui; Xu, Renjing; Wang, Beibei; Wilkie, Alexander
    Neural Radiance Fields (NeRF) have achieved remarkable progress in neural rendering. Extracting geometry from NeRF typically relies on the Marching Cubes algorithm, which uses a hand-crafted threshold to define the level set. However, this threshold-based approach requires laborious and scenario-specific tuning, limiting its practicality for real-world applications. In this work, we seek to enhance the efficiency of this method during the training time. To this end, we introduce a spiking neuron mechanism that dynamically adjusts the threshold, eliminating the need for manual selection. Despite its promise, directly training with the spiking neuron often results in model collapse and noisy outputs. To overcome these challenges, we propose a round-robin strategy that stabilizes the training process and enables the geometry network to achieve a sharper and more precise density distribution with minimal computational overhead. We validate our approach through extensive experiments on both synthetic and real-world datasets. The results show that our method significantly improves the performance of threshold-based techniques, offering a more robust and efficient solution for NeRF geometry extraction.
  • Item
    Radiative Backpropagation with Non-Static Geometry
    (The Eurographics Association, 2025) Worchel, Markus; Finnendahl, Ugo; Alexa, Marc; Wang, Beibei; Wilkie, Alexander
    Radiative backpropagation-based (RB) methods efficiently compute reverse-mode derivatives in physically-based differentiable rendering by simulating the propagation of differential radiance. A key assumption is that differential radiance is transported like normal radiance. We observe that this holds only when scene geometry is static and demonstrate that current implementations of radiative backpropagation produce biased gradients when scene parameters change geometry. In this work, we derive the differential transport equation without assuming static geometry. An immediate consequence is that the parameterization matters when the sampling process is not differentiated: only surface integrals allow a local formulation of the derivatives, i.e., one in which moving surfaces do not affect the entire path geometry. While considerable effort has been devoted to handling discontinuities resulting from moving geometry, we show that a biased interior derivative compromises even the simplest inverse rendering tasks, regardless of discontinuities. An implementation based on our derivation leads to systematic convergence to the reference solution in the same setting and provides unbiased RB interior derivatives for path-space differentiable rendering.
  • Item
    Differentiable Block Compression for Neural Texture
    (The Eurographics Association, 2025) Zhuang, Tao; Liu, Wentao; Liu, Ligang; Wang, Beibei; Wilkie, Alexander
    In real-time rendering, neural network models using neural textures (texture-form neural features) are increasingly applied. For high-memory scenarios like film-grade games, reducing neural texture memory overhead is critical. While neural textures can use hardware-accelerated block compression for memory savings and leverage hardware texture filtering for performance, mainstream block compression encoders only aim to minimize compression errors. This design may significantly increase neural network model loss. We propose a novel differentiable block compression (DBC) framework that integrates encoding and decoding into neural network optimization training. Compared with direct compression by mainstream encoders, end-to-end trained neural textures reduce model loss. The framework first enables differentiable encoding computation, then uses a compression-error-based stochastic sampling strategy for encoding configuration selection. A Mixture of Partitions (MoP) module is introduced to reduce computational costs from multiple partition configurations. As DBC employs native block compression formats, inference maintains real-time performance.