Self-Supervised Neural Global Illumination for Stereo-Rendering
| dc.contributor.author | Zhang, Ziyang | en_US |
| dc.contributor.author | Simo-Serra, Edgar | en_US |
| dc.contributor.editor | Christie, Marc | en_US |
| dc.contributor.editor | Han, Ping-Hsuan | en_US |
| dc.contributor.editor | Lin, Shih-Syun | en_US |
| dc.contributor.editor | Pietroni, Nico | en_US |
| dc.contributor.editor | Schneider, Teseo | en_US |
| dc.contributor.editor | Tsai, Hsin-Ruey | en_US |
| dc.contributor.editor | Wang, Yu-Shuen | en_US |
| dc.contributor.editor | Zhang, Eugene | en_US |
| dc.date.accessioned | 2025-10-07T06:05:13Z | |
| dc.date.available | 2025-10-07T06:05:13Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | We propose a novel neural global illumination baking method for real-time stereoscopic rendering, with applications to virtual reality. Naively applying neural global illumination to stereoscopic rendering requires running the model once per eye, which doubles the computational cost and makes it infeasible for real-time virtual reality applications. Training a stereoscopic model from scratch is also impractical, as it would require additional path-traced ground truth for both eyes. We overcome these limitations by first training a standard neural global illumination baking model on a single-eye dataset. We then use self-supervised learning to train a second, stereoscopic model with the first model as its teacher, transferring the weights of the first model to the second to accelerate training. Furthermore, our spatial coherence loss encourages consistency between the renderings for the two eyes. Experiments show that our method achieves the same quality as the original single-eye model with minimal overhead, enabling real-time performance in virtual reality. | en_US |
| dc.description.sectionheaders | Posters and Demos | |
| dc.description.seriesinformation | Pacific Graphics Conference Papers, Posters, and Demos | |
| dc.identifier.doi | 10.2312/pg.20251305 | |
| dc.identifier.isbn | 978-3-03868-295-0 | |
| dc.identifier.pages | 2 pages | |
| dc.identifier.uri | https://doi.org/10.2312/pg.20251305 | |
| dc.identifier.uri | https://diglib.eg.org/handle/10.2312/pg20251305 | |
| dc.publisher | The Eurographics Association | en_US |
| dc.rights | Attribution 4.0 International License | |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
| dc.subject | CCS Concepts: Computing methodologies → Rendering; Neural networks; Human-centered computing → Virtual reality | |
| dc.subject | Computing methodologies → Rendering | |
| dc.subject | Neural networks | |
| dc.subject | Human-centered computing → Virtual reality | |
| dc.title | Self-Supervised Neural Global Illumination for Stereo-Rendering | en_US |
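The abstract above describes distilling a stereoscopic student model from a single-eye teacher, combined with a spatial coherence loss that ties the two eye renderings together. The sketch below is a minimal conceptual illustration of such a training objective, not the authors' implementation; the function names, the direct left/right difference used as the coherence term, and the `lambda_coherence` weight are all illustrative assumptions.

```python
def mse(a, b):
    """Mean squared error over flat lists of pixel values."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def stereo_distillation_loss(student_left, student_right,
                             teacher_left, teacher_right,
                             lambda_coherence=0.1):
    """Hypothetical combined objective for a stereo student model.

    Self-supervised distillation: the single-eye teacher is evaluated once
    per eye to produce pseudo ground truth, so no additional path-traced
    training data is needed for the second eye.
    """
    distill = (mse(student_left, teacher_left)
               + mse(student_right, teacher_right))
    # Spatial coherence term: penalize inconsistency between the two eye
    # outputs. Here a direct per-pixel difference stands in; a real system
    # would first warp/reproject one eye's rendering into the other view.
    coherence = mse(student_left, student_right)
    return distill + lambda_coherence * coherence
```

With identical student and teacher outputs the loss is zero; otherwise the coherence weight trades off fidelity to the teacher against agreement between the two eyes.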