Search Results

Now showing 1 - 10 of 23
  • Item
    Perceived Quality of BRDF Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Kavoosighafi, Behnaz; Mantiuk, Rafal K.; Hajisharif, Saghi; Miandji, Ehsan; Unger, Jonas; Wang, Beibei; Wilkie, Alexander
    Material appearance is commonly modeled with Bidirectional Reflectance Distribution Functions (BRDFs), which must trade accuracy against complexity and storage cost. To investigate current practices in BRDF modeling, we collect the first high dynamic range stereoscopic video dataset that captures the perceived quality degradation with respect to a number of parametric and non-parametric BRDF models. Our dataset shows that the loss functions currently used to fit BRDF models, such as the mean-squared error of logarithmic reflectance values, correlate poorly with the perceived quality of materials in rendered videos. We further show that quality metrics that compare rendered material samples correlate significantly better with subjective quality judgments, and that a simple Euclidean distance in the ITP color space (DEITP) shows the highest correlation. Additionally, we investigate the use of different BRDF-space metrics as loss functions for fitting BRDF models and find that logarithmic mapping is the most effective approach for BRDF-space loss functions.
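    As a rough illustration of the two metrics named above (a sketch, not the authors' code; array layouts and the eps guard are our assumptions):

      import numpy as np

      def log_mse_loss(brdf_fit, brdf_ref, eps=1e-4):
          # Mean-squared error of logarithmic reflectance values: the kind of
          # BRDF-space loss reported above to correlate poorly with perceived
          # quality. eps guards against log(0) for near-black samples.
          return np.mean((np.log(brdf_fit + eps) - np.log(brdf_ref + eps)) ** 2)

      def delta_e_itp(img_a_itp, img_b_itp):
          # Mean Euclidean distance in the ITP colour space over all pixels.
          # Inputs are assumed to be HxWx3 arrays already converted to ITP;
          # Rec. ITU-R BT.2124 scales the per-pixel distance by 720.
          d = img_a_itp - img_b_itp
          return 720.0 * np.mean(np.sqrt(np.sum(d * d, axis=-1)))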
  • Item
    Real-time Level-of-detail Strand-based Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Huang, Tao; Zhou, Yang; Lin, Daqi; Zhu, Junqiu; Yan, Ling-Qi; Wu, Kui; Wang, Beibei; Wilkie, Alexander
    We present a real-time strand-based rendering framework that ensures seamless transitions between different levels of detail (LoD) while maintaining a consistent appearance. We first introduce an aggregated BCSDF model that accurately captures both single and multiple scattering within a cluster of hairs or fibers. Building upon this, we introduce an LoD framework for hair rendering that dynamically, adaptively, and independently replaces clusters of individual hairs with thick strands based on their projected screen widths, as sketched below. Through tests on diverse hairstyles with various hair colors and animations, as well as knit patches, our framework closely replicates the multiple-scattering appearance of the full geometry at various viewing distances, achieving up to a 13× speedup.
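    A minimal sketch of that LoD selection step, assuming the switch is driven by a per-cluster projected screen width (parameter names and the one-pixel threshold are illustrative, not the paper's):

      def select_lod(cluster_radius, distance, fov_y, image_height,
                     width_threshold_px=1.0):
          # Approximate projected width in pixels of a cluster with world-space
          # radius `cluster_radius` at `distance` from the camera, under a
          # vertical field of view `fov_y` (radians) spanning `image_height`
          # pixels; uses the small-angle approximation.
          width_px = (2.0 * cluster_radius / distance) * (image_height / fov_y)
          # Clusters that are thin on screen are replaced by one thick strand
          # shaded with the aggregated BCSDF; wide ones keep individual strands.
          return "individual_strands" if width_px > width_threshold_px else "thick_strand"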
  • Item
    Real-Time Importance Deep Shadow Maps with Hardware Ray Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Kern, René; Brüll, Felix; Grosch, Thorsten; Wang, Beibei; Wilkie, Alexander
    Rendering shadows for semi-transparent objects like smoke significantly enhances the realism of the final image. With advancements in ray tracing hardware, tracing visibility rays in real time has become possible. However, generating shadows for semi-transparent objects requires evaluating multiple or all intersections along the ray, resulting in a deep shadow ray. Deep Shadow Maps (DSM) offer an alternative but are constrained by their fixed resolution. We introduce Importance Deep Shadow Maps (IDSM), a real-time algorithm that adaptively distributes deep shadow samples based on importance captured from the current camera viewport. Additionally, we propose a novel DSM data structure built on the ray tracing acceleration structure, improving performance in scenarios requiring many samples per DSM texel. Our IDSM approach achieves speedups of up to 6.89× compared to hardware ray tracing while maintaining nearly indistinguishable quality.
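    For context, a deep shadow map texel stores a transmittance function along the light ray rather than a single depth. A minimal sketch of evaluating such a texel, assuming a piecewise-linear node list (our assumption, not the IDSM data structure):

      import bisect

      def transmittance(deep_texel, depth):
          # deep_texel: list of (depth, transmittance) nodes, sorted by depth,
          # describing piecewise-linear transmittance along the light ray.
          depths = [d for d, _ in deep_texel]
          i = bisect.bisect_right(depths, depth)
          if i == 0:
              return 1.0                    # in front of all occluders
          if i == len(deep_texel):
              return deep_texel[-1][1]      # behind the last occluder
          (d0, t0), (d1, t1) = deep_texel[i - 1], deep_texel[i]
          w = (depth - d0) / (d1 - d0)
          return t0 + w * (t1 - t0)         # linear interpolation between nodes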
  • Item
    SPaGS: Fast and Accurate 3D Gaussian Splatting for Spherical Panoramas
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Li, Junbo; Hahlbohm, Florian; Scholz, Timon; Eisemann, Martin; Tauscher, Jan-Philipp; Magnor, Marcus; Wang, Beibei; Wilkie, Alexander
    In this paper, we propose SPaGS, a high-quality, real-time free-viewpoint rendering approach for 360-degree panoramic images. While existing methods building on Neural Radiance Fields or 3D Gaussian Splatting struggle to achieve real-time frame rates and high-quality results at the same time, SPaGS combines the advantages of an explicit 3D Gaussian-based scene representation and ray casting-based rendering to attain fast and accurate results. Central to our approach is the exact calculation of axis-aligned bounding boxes for spherical images, which significantly accelerates omnidirectional ray casting of 3D Gaussians. We also present a new dataset of ten real-world scenes recorded with a drone, comprising both calibrated 360-degree panoramic images and perspective images captured simultaneously along the same flight trajectory. Our evaluation on this new dataset as well as on established benchmarks demonstrates that SPaGS outperforms state-of-the-art methods in both rendering quality and speed.
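    As background for the omnidirectional ray casting mentioned above, a minimal sketch of generating a camera-space ray direction from an equirectangular pixel (the standard mapping, not the paper's bounding-box derivation; the y-up frame is our convention):

      import numpy as np

      def equirect_ray(u, v, width, height):
          # Pixel (u, v) of a width x height equirectangular panorama maps to
          # longitude in [-pi, pi) and latitude in [-pi/2, pi/2]; the returned
          # direction is a unit vector in a y-up camera frame.
          lon = (u + 0.5) / width * 2.0 * np.pi - np.pi
          lat = np.pi / 2.0 - (v + 0.5) / height * np.pi
          return np.array([np.cos(lat) * np.sin(lon),
                           np.sin(lat),
                           np.cos(lat) * np.cos(lon)])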
  • Item
    Multiview Geometric Regularization of Gaussian Splatting for Accurate Radiance Fields
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Kim, Jungeon; Park, Geonsoo; Lee, Seungyong; Wang, Beibei; Wilkie, Alexander
    Recent methods, such as 2D Gaussian Splatting and Gaussian Opacity Fields, have aimed to address the geometric inaccuracies of 3D Gaussian Splatting while retaining its superior rendering quality. However, these approaches still struggle to reconstruct smooth and reliable geometry, particularly in scenes with significant color variation across viewpoints, due to their per-point appearance modeling and single-view optimization constraints. In this paper, we propose an effective multiview geometric regularization strategy that integrates multiview stereo (MVS) depth, RGB, and normal constraints into Gaussian Splatting initialization and optimization. Our key insight is the complementary relationship between MVS-derived depth points and Gaussian Splatting-optimized positions: MVS robustly estimates geometry in regions of high color variation through local patch-based matching and epipolar constraints, whereas Gaussian Splatting provides more reliable and less noisy depth estimates near object boundaries and in regions with lower color variation. To leverage this insight, we introduce a median depth-based multiview relative depth loss with uncertainty estimation, effectively integrating MVS depth information into Gaussian Splatting optimization. We also propose an MVS-guided Gaussian Splatting initialization to avoid Gaussians falling into suboptimal positions. Extensive experiments validate that our approach successfully combines these strengths, enhancing both geometric accuracy and rendering quality across diverse indoor and outdoor scenes.
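    A guess at the shape of such a median-based relative depth loss (a hedged sketch; the paper's exact definition and uncertainty weighting may differ):

      import numpy as np

      def relative_depth_loss(gs_depth, mvs_depth, weight):
          # gs_depth: depth rendered from the splatting model for one view;
          # mvs_depth: MVS depth for the same view (0 where unavailable);
          # weight: per-pixel confidence down-weighting uncertain MVS depth.
          # Normalising both maps by their medians compares *relative* depth,
          # making the loss invariant to a global scale difference.
          valid = mvs_depth > 0
          gs_n = gs_depth / np.median(gs_depth[valid])
          mvs_n = mvs_depth / np.median(mvs_depth[valid])
          diff = np.abs(gs_n - mvs_n)
          return np.sum(weight[valid] * diff[valid]) / np.sum(weight[valid])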
  • Item
    A Wave-optics BSDF for Correlated Scatterers
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Yang, Ruomai; Kim, Juhyeon; Pediredla, Adithya; Jarosz, Wojciech; Wang, Beibei; Wilkie, Alexander
    We present a wave-optics-based BSDF for simulating the corona effect observed when viewing strong light sources through materials such as certain fabrics or glass surfaces with condensation. These visual phenomena arise from the interference of diffraction patterns caused by correlated, disordered arrangements of droplets or pores. Our method leverages the pair correlation function (PCF) to decouple the spatial relationships between scatterers from the diffraction behavior of individual scatterers. This two-level decomposition allows us to derive a physically based BSDF that provides explicit control over both scatterer shape and spatial correlation. We also introduce a practical importance sampling strategy for integrating our BSDF within a Monte Carlo renderer. Our simulation results and real-world comparisons demonstrate that the method can reliably reproduce the characteristics of corona effects in various real-world diffractive materials.
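    The two-level decomposition echoes a standard scattering-theory identity (our notation, not necessarily the paper's): for identical scatterers with form factor f(q), number density ρ, and pair correlation function g(r), the far-field intensity factorises as

      I(\mathbf{q}) \propto \lvert f(\mathbf{q}) \rvert^{2} \, S(\mathbf{q}),
      \qquad
      S(\mathbf{q}) = 1 + \rho \int \bigl( g(\mathbf{r}) - 1 \bigr)\, e^{-i\,\mathbf{q}\cdot\mathbf{r}} \,\mathrm{d}\mathbf{r},

    so scatterer shape enters through f while spatial correlation enters through the structure factor S, which is what gives independent control over the two.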
  • Item
    VideoMat: Extracting PBR Materials from Video Diffusion Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Munkberg, Jacob; Wang, Zian; Liang, Ruofan; Shen, Tianchang; Hasselgren, Jon; Wang, Beibei; Wilkie, Alexander
    We leverage finetuned video diffusion models, intrinsic decomposition of videos, and physically based differentiable rendering to generate high-quality materials for 3D models given a text prompt or a single image. First, we condition a video diffusion model to respect the input geometry and lighting condition; this model produces multiple views of a given 3D model with coherent material properties. Second, we use a recent model to extract intrinsics (base color, roughness, metallic) from the generated video. Finally, we use the intrinsics alongside the generated video in a differentiable path tracer to robustly extract PBR materials directly compatible with common content creation tools.
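    A hedged sketch of the final stage, fitting PBR maps by gradient descent through a differentiable renderer (names are illustrative; `render` stands in for the differentiable path tracer, and the L1 objective is our assumption):

      import torch

      def fit_materials(render, frames, cameras, basecolor_init, rough_metal_init,
                        steps=500, lr=1e-2):
          # Initialise from the intrinsics extracted from the generated video.
          basecolor = basecolor_init.clone().requires_grad_(True)
          rough_metal = rough_metal_init.clone().requires_grad_(True)
          opt = torch.optim.Adam([basecolor, rough_metal], lr=lr)
          for _ in range(steps):
              opt.zero_grad()
              loss = 0.0
              for frame, cam in zip(frames, cameras):
                  pred = render(cam, basecolor, rough_metal)   # differentiable
                  loss = loss + torch.mean(torch.abs(pred - frame))
              loss.backward()
              opt.step()
              with torch.no_grad():          # keep maps in a valid PBR range
                  basecolor.clamp_(0.0, 1.0)
                  rough_metal.clamp_(0.0, 1.0)
          return basecolor.detach(), rough_metal.detach()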
  • Item
    Artist-Inator: Text-based, Gloss-aware Non-photorealistic Stylization
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Subias, Jose Daniel; Daniel-Soriano, Saúl; Gutierrez, Diego; Serrano, Ana; Wang, Beibei; Wilkie, Alexander
    Large diffusion models have made a remarkable leap in synthesizing high-quality artistic images from text descriptions. However, these powerful pre-trained models still lack control over key material appearance properties, such as gloss. In this work, we present a threefold contribution: (1) we analyze how gloss is perceived across different artistic styles (i.e., oil painting, watercolor, ink pen, charcoal, and soft crayon); (2) we leverage our findings to create a dataset with 1,336,272 stylized images of many different geometries in all five styles, including automatically computed text descriptions of their appearance (e.g., ''A glossy bunny hand painted with an orange soft crayon''); and (3) we train a ControlNet to condition Stable Diffusion XL to synthesize novel painterly depictions of new objects from simple inputs such as edge maps, hand-drawn sketches, or clip art. Compared to previous approaches, our framework yields more accurate results despite the simplified input, as we show both quantitatively and qualitatively.
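    The conditioning setup can be approximated with the public diffusers API, using an off-the-shelf edge-conditioned ControlNet as a stand-in for the authors' gloss-aware one (whose weights we do not assume are available; the edge-map filename is hypothetical):

      import torch
      from PIL import Image
      from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

      # Public Canny-edge ControlNet for SDXL; the paper trains its own.
      controlnet = ControlNetModel.from_pretrained(
          "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)
      pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0",
          controlnet=controlnet, torch_dtype=torch.float16).to("cuda")

      edge_map = Image.open("bunny_edges.png")   # hypothetical edge-map input
      result = pipe(prompt="A glossy bunny hand painted with an orange soft crayon",
                    image=edge_map).images[0]
      result.save("stylized_bunny.png")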
  • Item
    DiffNEG: A Differentiable Rasterization Framework for Online Aiming Optimization in Solar Power Tower Systems
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Zheng, Cangping; Lin, Xiaoxia; Li, Dongshuai; Zhao, Yuhong; Feng, Jieqing; Wang, Beibei; Wilkie, Alexander
    Inverse rendering aims to infer scene parameters from observed images. In Solar Power Tower (SPT) systems, this corresponds to an aiming optimization problem: adjusting heliostats' orientations to shape the radiative flux density distribution (RFDD) on the receiver so that it conforms to a desired distribution. SPT systems are widely favored in renewable energy, where aiming optimization is crucial for thermal efficiency and safety. However, traditional aiming optimization methods are inefficient and fail to meet online demands. In this paper, we propose a novel optimization approach, DiffNEG. DiffNEG introduces a differentiable rasterization method that models the reflected radiative flux of each heliostat as an elliptical Gaussian distribution. It leverages data-driven techniques to enhance simulation accuracy and employs automatic differentiation combined with gradient descent to achieve online, gradient-guided optimization in a continuous solution space. Experiments on a real large-scale heliostat field with nearly 30,000 heliostats demonstrate that DiffNEG completes an optimization within 10 seconds, improving efficiency by one order of magnitude over the latest DiffMCRT method and by three orders of magnitude over traditional heuristic methods, while also exhibiting superior robustness under both steady and transient states.
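    A minimal sketch of the idea (not DiffNEG itself): each heliostat deposits an elliptical Gaussian around its aim point on the receiver, and automatic differentiation moves the aim points so the summed flux matches a target distribution. All tensor shapes and the MSE objective are our assumptions:

      import torch

      def total_flux(aim, power, sigma, gx, gy):
          # aim: (N, 2) aim points; power: (N,) peak fluxes; sigma: (N, 2)
          # per-axis spreads; gx, gy: (H, W) receiver grid coordinates.
          dx = gx[None] - aim[:, 0, None, None]
          dy = gy[None] - aim[:, 1, None, None]
          g = torch.exp(-0.5 * ((dx / sigma[:, 0, None, None]) ** 2
                                + (dy / sigma[:, 1, None, None]) ** 2))
          return (power[:, None, None] * g).sum(dim=0)   # (H, W) total flux

      def optimize_aiming(aim_init, power, sigma, gx, gy, target,
                          steps=200, lr=0.05):
          # Gradient-guided aiming: minimise the mismatch between the summed
          # flux and the desired distribution over the aim-point positions.
          aim = aim_init.clone().requires_grad_(True)
          opt = torch.optim.Adam([aim], lr=lr)
          for _ in range(steps):
              opt.zero_grad()
              loss = torch.mean((total_flux(aim, power, sigma, gx, gy) - target) ** 2)
              loss.backward()    # gradients via automatic differentiation
              opt.step()
          return aim.detach()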
  • Item
    Neural Field Multi-view Shape-from-polarisation
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Wanaset, Rapee; Guarnera, Giuseppe Claudio; Smith, William A. P.; Wang, Beibei; Wilkie, Alexander
    We tackle the problem of multi-view shape-from-polarisation using a neural implicit surface representation and volume rendering of a polarised neural radiance field (P-NeRF). The P-NeRF predicts the parameters of a mixed diffuse/specular polarisation model, which directly relates polarisation behaviour to the surface normal without explicitly modelling illumination or the BRDF. Via the implicit surface representation, this allows polarisation to directly inform the estimated geometry, improving shape estimation and also allowing separation of diffuse and specular radiance. For polarimetric images from division-of-focal-plane sensors, we fit directly to the raw data without first demosaicing, which avoids fitting to demosaicing artefacts; we also propose losses and saturation masking specifically to handle HDR measurements. Our method achieves state-of-the-art performance on the PANDORA benchmark. We also apply our method in a lightstage setting, providing single-shot face capture.
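    For context, shape-from-polarisation methods of this kind build on the standard relation between polariser angle and observed intensity (our notation, not necessarily the paper's model):

      I(\phi_{\mathrm{pol}}) = \frac{I_{\mathrm{un}}}{2} \Bigl( 1 + \rho \cos\bigl( 2\phi_{\mathrm{pol}} - 2\phi \bigr) \Bigr),

    where ρ is the degree of linear polarisation and φ the phase angle; for diffuse reflection φ follows the azimuth of the surface normal, while specular reflection shifts it by π/2, which is how a mixed diffuse/specular model ties polarisation measurements to geometry.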