Rendering 2025 - Symposium Track
Browsing Rendering 2025 - Symposium Track by Subject "CCS Concepts: Computing methodologies -> Rendering"
Now showing 1 - 3 of 3
Item Adaptive Multiple Control Variates for Many-Light Rendering (The Eurographics Association, 2025) Xu, Xiaofeng; Wang, Lu; Wang, Beibei; Wilkie, Alexander
Monte Carlo integration estimates the path integral in light transport by randomly sampling light paths and averaging their contributions. However, in scenes with many lights, the resulting estimates suffer from noise and slow convergence due to high-frequency discontinuities introduced by complex light visibility, scattering functions, and emissive properties. To mitigate these challenges, control variates have been employed to approximate the integrand and reduce variance. While previous approaches have shown promise in direct illumination applications, they struggle to handle the discontinuities inherent in many-light environments efficiently, especially when relying on a single control variate. In this work, we introduce an adaptive method that generates multiple control variates tailored to the spatial distribution and number of lights in the scene. Drawing inspiration from hierarchical light clustering methods such as Lightcuts, our approach dynamically determines the number of control variates. We validate our method on the direct illumination problem in scenes with many lights, demonstrating that our adaptive multiple control variates not only outperform a single-control-variate strategy but also achieve a modest improvement over current state-of-the-art many-light sampling techniques.

Item Joint Gaussian Deformation in Triangle-Deformed Space for High-Fidelity Head Avatars (The Eurographics Association, 2025) Lu, Jiawei; Guang, Kunxin; Hao, Conghui; Sun, Kai; Yang, Jian; Xie, Jin; Wang, Beibei; Wang, Beibei; Wilkie, Alexander
Creating 3D human heads with mesoscale details and high-fidelity animation from monocular or sparse multi-view videos is challenging. While 3D Gaussian splatting (3DGS) has brought significant benefits to this task thanks to its powerful representation ability and rendering speed, existing works still face several issues, including inaccurate, blurry deformation and a lack of detailed appearance, owing to the difficulty of representing complex deformations and to unreasonable Gaussian placement. In this paper, we propose a joint Gaussian deformation method that decouples the complex deformation into two simpler deformations, incorporating a learnable displacement-map-guided Gaussian-triangle binding and a neural deformation refinement, improving the fidelity of animation and the detail of reconstructed head avatars. However, renderings of reconstructed head avatars at unseen views still show artifacts due to overfitting on sparse input views. To address this issue, we leverage synthesized pseudo views rendered with fitted textured 3DMMs as priors to initialize the Gaussians, which helps maintain a consistent and realistic appearance across views. As a result, our method outperforms existing state-of-the-art approaches by about 4.3 dB PSNR in novel-view synthesis and about 0.9 dB PSNR in self-reenactment on multi-view video datasets. Our method also preserves high-frequency details, exhibits more accurate deformations, and significantly reduces artifacts in unseen views.
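The first item above rests on a standard identity: if g approximates the integrand f and the integral G of g is known, then G + E[f - g] equals E[f], ideally with lower variance. Below is a minimal 1D sketch of that idea, not the paper's algorithm: a hard "visibility" cut-off stands in for many-light discontinuities, and piecewise-constant control variates fit from a pilot sample stand in for per-light-cluster CVs. The toy integrand, segment boundaries, and pilot-fitting scheme are all illustrative assumptions.

```python
# Minimal sketch: single vs. multiple control variates on a discontinuous
# toy integrand. Illustrative only; not the paper's adaptive construction.
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # Toy stand-in for a many-light integrand: smooth "emission" 0.4 + 0.2x,
    # cut off by a hard visibility boundary at x = 0.6. True integral: 0.276.
    return (0.4 + 0.2 * x) * (x < 0.6)

N = 20_000
x = rng.random(N)                       # uniform samples on [0, 1], pdf = 1
plain = f(x)                            # plain Monte Carlo contributions

# Single smooth control variate g(x) = 0.4 + 0.2x with known integral G = 0.5.
# It ignores the discontinuity, so the residual f - g can even add variance.
g = 0.4 + 0.2 * x
single = 0.5 + (f(x) - g)

# "Multiple" control variates: one constant per segment, the segments standing
# in for light clusters. Constants are fit from a small pilot sample, and the
# integral of the piecewise-constant CV is known exactly.
edges = np.array([0.0, 0.6, 1.0])       # segment boundaries ("clusters")
pilot = rng.random(256)
consts = np.array([f(pilot[(pilot >= a) & (pilot < b)]).mean()
                   for a, b in zip(edges[:-1], edges[1:])])
G_multi = float(np.sum(consts * np.diff(edges)))
seg = np.searchsorted(edges, x, side="right") - 1
multi = G_multi + (f(x) - consts[seg])

# All three agree in expectation; the per-segment CVs, which respect the
# discontinuity, give by far the lowest standard error.
for name, est in (("plain", plain), ("single CV", single), ("multi CV", multi)):
    print(f"{name:9s} mean={est.mean():.4f}  stderr={est.std(ddof=1)/np.sqrt(N):.4f}")
```

Running the sketch shows the single smooth CV performing no better than plain Monte Carlo here, because its mismatch across the cut dominates, while the segment-aware CVs cut the standard error by roughly an order of magnitude; this mirrors the abstract's claim about single control variates struggling with discontinuities.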
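For the second item, the notion of binding Gaussians to mesh triangles can be illustrated with a toy: each splat stores fixed barycentric coordinates plus a displacement along the face normal, so it rides along as the mesh deforms. This is only a sketch under assumed conventions; the paper's learnable displacement maps and neural deformation refinement are reduced here to a fixed scalar and a comment, and all names are illustrative.

```python
# Minimal sketch (not the paper's implementation) of Gaussian-triangle
# binding: a splat follows its triangle through an arbitrary deformation.
import numpy as np

def face_normal(tri):
    """Unit normal of a triangle given as a 3x3 array of vertices."""
    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    return n / np.linalg.norm(n)

def bound_gaussian_position(tri, bary, displacement):
    """Gaussian center = barycentric point on the face + offset along its normal."""
    base = bary @ tri                    # weighted sum of the three vertices
    return base + displacement * face_normal(tri)

# Rest-pose triangle and one bound Gaussian.
rest = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])
bary = np.array([0.3, 0.3, 0.4])         # fixed binding coordinates
disp = 0.02                              # e.g. read from a displacement map

print("rest:    ", bound_gaussian_position(rest, bary, disp))

# After the underlying mesh deforms (e.g. a 3DMM expression), the same binding
# transports the Gaussian; a neural refinement stage would then correct the
# residual deformation the triangle itself cannot express.
posed = rest + np.array([[0.0, 0.0, 0.10],
                         [0.1, 0.0, 0.20],
                         [0.0, 0.1, 0.15]])
print("deformed:", bound_gaussian_position(posed, bary, disp))
```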
Item Procedural Bump-based Defect Synthesis for Industrial Inspection (The Eurographics Association, 2025) Mao, Runzhou; Garth, Christoph; Gospodnetic, Petra; Wang, Beibei; Wilkie, Alexander
Automated defect detection is critical for quality control, but collecting and annotating real-world defect images remains costly and time-consuming, motivating the use of synthetic data. Existing methods such as geometry-based modeling, normal maps, and image-based approaches often struggle to balance realism, efficiency, and scalability. We propose a procedural method for synthesizing small-scale surface defects using gradient-based bump mapping and triplanar projection. By perturbing surface normals at shading time, our approach enables parameterized control over diverse scratch and dent patterns while avoiding mesh edits, UV mapping, and texture lookups. It also produces pixel-accurate defect masks for annotation. Experimental results show that our method achieves visual quality comparable to geometry-based modeling, with lower computational overhead and better surface continuity than static normal maps. The method offers a lightweight and scalable solution for generating high-quality training data for industrial inspection tasks.
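The third item's core mechanism, perturbing shading normals with the gradient of a procedural height field sampled by triplanar projection, can be sketched as below. The Gaussian-profile scratch, the blend weights, and the finite-difference gradient are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch of shading-time normal perturbation with triplanar
# projection; the defect profile and parameters are illustrative only.
import numpy as np

def scratch_height(u, v, width=0.02, depth=0.01):
    """Procedural 2D bump: a single groove running along v at u = 0.5."""
    d = (u - 0.5) / width
    return -depth * np.exp(-d * d)       # Gaussian-profile dent

def triplanar_height(p, n):
    """Blend the 2D bump evaluated on the three axis planes, weighted by |n|."""
    w = np.abs(n)
    w = w / w.sum()
    hx = scratch_height(p[1], p[2])      # projection along x
    hy = scratch_height(p[0], p[2])      # projection along y
    hz = scratch_height(p[0], p[1])      # projection along z
    return w[0] * hx + w[1] * hy + w[2] * hz

def perturbed_normal(p, n, eps=1e-4):
    """Bump mapping: tilt the normal by the tangential gradient of the height."""
    grad = np.array([
        (triplanar_height(p + np.eye(3)[i] * eps, n) -
         triplanar_height(p - np.eye(3)[i] * eps, n)) / (2 * eps)
        for i in range(3)
    ])
    grad -= n * np.dot(grad, n)          # keep only the tangential component
    out = n - grad
    return out / np.linalg.norm(out)

# Flat surface facing +z: normals tilt toward the groove, with no mesh edits,
# no UV mapping, and no texture lookup, as the abstract describes.
n = np.array([0.0, 0.0, 1.0])
for u in (0.46, 0.50, 0.54):
    p = np.array([u, 0.3, 0.0])
    print(f"u={u:.2f}  n'={perturbed_normal(p, n)}")
```

Because the height field is defined in world space and evaluated only at shading time, a pixel-accurate defect mask follows directly from thresholding the same height function at each shaded point, which is consistent with the annotation claim in the abstract.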