Self-Supervised Neural Global Illumination for Stereo-Rendering

Date
2025
Publisher
The Eurographics Association
Abstract
We propose a novel neural global illumination baking method for real-time stereoscopic rendering, with applications to virtual reality. Naively applying neural global illumination to stereoscopic rendering requires running the model once per eye, which doubles the computational cost and makes it infeasible for real-time virtual reality applications. Training a stereoscopic model from scratch is also impractical, as it would require additional path-traced ground truth for both eyes. We overcome these limitations by first training a standard neural global illumination baking model on a single-eye dataset. We then use self-supervised learning to train a second, stereoscopic model with the first model as its teacher, transferring the weights of the first model to the second to accelerate training. Furthermore, our spatial coherence loss encourages consistency between the renderings for the two eyes. Experiments show our method achieves the same quality as the original single-eye model with minimal overhead, enabling real-time performance in virtual reality.
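The teacher–student training described in the abstract can be sketched loosely as a combined objective: a distillation term that matches the stereo student's per-eye outputs to the single-eye teacher, plus a spatial coherence term between the two eyes. This is a minimal illustration only; the function name, the plain L2 coherence term, and the weighting factor are assumptions, as the abstract does not give the actual loss formulation.

```python
import numpy as np

def stereo_distillation_loss(student_left, student_right,
                             teacher_left, teacher_right,
                             coherence_weight=0.1):
    """Hypothetical sketch of a teacher-student stereo loss.

    Distillation: match the student's left/right renderings to the
    frozen single-eye teacher evaluated at each eye's viewpoint.
    Coherence: penalize disagreement between the two eyes' renderings
    (here a plain L2 on aligned images; the paper's exact spatial
    coherence formulation is not specified in the abstract).
    """
    distill = (np.mean((student_left - teacher_left) ** 2)
               + np.mean((student_right - teacher_right) ** 2))
    coherence = np.mean((student_left - student_right) ** 2)
    return distill + coherence_weight * coherence

# Toy usage on random "images" (H, W, 3):
rng = np.random.default_rng(0)
teacher_l = rng.random((8, 8, 3))
teacher_r = rng.random((8, 8, 3))
loss = stereo_distillation_loss(teacher_l, teacher_r, teacher_l, teacher_r)
# When the student exactly matches the teacher, only the coherence
# term between the two eyes remains.
```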
CCS Concepts: Computing methodologies → Rendering; Neural networks; Human-centered computing → Virtual reality

        
Citation

@inproceedings{10.2312:pg.20251305,
  booktitle = {Pacific Graphics Conference Papers, Posters, and Demos},
  editor    = {Christie, Marc and Han, Ping-Hsuan and Lin, Shih-Syun and Pietroni, Nico and Schneider, Teseo and Tsai, Hsin-Ruey and Wang, Yu-Shuen and Zhang, Eugene},
  title     = {{Self-Supervised Neural Global Illumination for Stereo-Rendering}},
  author    = {Zhang, Ziyang and Simo-Serra, Edgar},
  year      = {2025},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-295-0},
  DOI       = {10.2312/pg.20251305}
}