Search Results

Now showing 1 - 10 of 11
  • Item
    Irradiance Gradients in the Presence of Participating Media and Occlusions
    (The Eurographics Association and Blackwell Publishing Ltd, 2008) Jarosz, Wojciech; Zwicker, Matthias; Jensen, Henrik Wann
    In this paper we present a technique for computing translational gradients of indirect surface reflectance in scenes containing participating media and significant occlusions. These gradients describe how the incident radiance field changes with respect to translation on surfaces. Previous techniques for computing gradients ignore the effects of volume scattering and attenuation and assume that radiance is constant along rays connecting surfaces. We present a novel gradient formulation that correctly captures the influence of participating media. Our formulation accurately accounts for changes of occlusion, including the effect of surfaces occluding scattering media. We show how the proposed gradients can be used within an irradiance caching framework to more accurately handle scenes with participating media, providing significant improvements in interpolation quality.
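    The sketch below illustrates, under simplifying assumptions, where such translational gradients plug in: a first-order extrapolation of cached irradiance during an irradiance-cache lookup. The record layout, the Ward-style error weight, and the use of scalar irradiance are assumptions for illustration, not the paper's media-aware gradient computation itself.

      # Minimal sketch: interpolating cached irradiance with translational gradients.
      # Record fields and the weighting function follow classic irradiance caching;
      # they are assumptions here, not this paper's formulation.
      import numpy as np

      def interpolate_irradiance(x, n, records, alpha=0.1):
          """Extrapolate cached scalar irradiance to point x (normal n).
          Each record holds:
            'pos'  : cache point position (3,)
            'nrm'  : surface normal at the cache point (3,)
            'E'    : cached irradiance (scalar)
            'gradE': translational gradient dE/dx (3,)
            'R'    : harmonic mean distance to visible geometry
          Returns None if no record passes the error threshold alpha."""
          num, den = 0.0, 0.0
          for rec in records:
              d = x - rec['pos']
              # Ward-style error weight: shrinks as we move away or normals diverge.
              w = 1.0 / (np.linalg.norm(d) / rec['R']
                         + np.sqrt(max(0.0, 1.0 - float(np.dot(n, rec['nrm'])))) + 1e-6)
              if w < 1.0 / alpha:
                  continue  # record too far / too different to reuse
              # First-order extrapolation using the translational gradient.
              num += w * (rec['E'] + float(np.dot(rec['gradE'], d)))
              den += w
          return num / den if den > 0.0 else None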
  • Item
    Rendering Translucent Materials Using Photon Diffusion
    (The Eurographics Association, 2007) Donner, Craig; Jensen, Henrik Wann; Jan Kautz and Sumanta Pattanaik
    We present a new algorithm for rendering translucent materials that combines photon tracing with diffusion. This combination makes it possible to efficiently render highly scattering translucent materials while accounting for internal blockers, complex geometry, translucent inter-scattering, and transmission and refraction of light at the boundary, causing internal caustics. These effects cannot be accounted for with previous rendering approaches using the dipole or multipole diffusion approximations that only sample the incident illumination at the surface of the material. Instead of sampling lighting at the surface, we trace photons into the material and store them volumetrically at their first scattering interaction with the material. We hierarchically integrate the diffusion of light from the photons to compute the radiant emittance at points on the surface of the material. For increased accuracy, we use the incidence plane of the photon and the viewpoint on the surface to blend between three analytic diffusion approximations that best describe the geometric configuration between the photon and the shading point. For this purpose, we introduce a new quadpole diffusion approximation that models diffusion at right-angled edges, and an attenuation kernel to more accurately model multiple scattering near a light source. The photon diffusion approach is as efficient as previous Monte Carlo sampling approaches based on the dipole or multipole diffusion approximations, and our results demonstrate that it is more accurate and capable of capturing several illumination effects previously ignored when simulating the diffusion of light in translucent materials.
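    For background, here is a sketch of the classical dipole diffusion approximation (Jensen et al. 2001) that the quadpole and photon-diffusion machinery described above builds upon and generalizes. The optical coefficients in the usage line are placeholders, not measured material data.

      # Classical dipole diffuse reflectance R_d(r); coefficients in inverse mm.
      import numpy as np

      def dipole_Rd(r, sigma_a, sigma_s_prime, eta=1.3):
          """Diffuse reflectance at distance r from the point of incidence."""
          sigma_t_prime = sigma_a + sigma_s_prime
          alpha_prime   = sigma_s_prime / sigma_t_prime
          sigma_tr      = np.sqrt(3.0 * sigma_a * sigma_t_prime)
          # Internal diffuse Fresnel reflectance and boundary term A.
          F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
          A    = (1.0 + F_dr) / (1.0 - F_dr)
          z_r  = 1.0 / sigma_t_prime            # real source depth
          z_v  = z_r * (1.0 + 4.0 * A / 3.0)    # mirrored virtual source height
          d_r  = np.sqrt(r * r + z_r * z_r)
          d_v  = np.sqrt(r * r + z_v * z_v)
          return (alpha_prime / (4.0 * np.pi)) * (
              z_r * (sigma_tr * d_r + 1.0) * np.exp(-sigma_tr * d_r) / d_r**3 +
              z_v * (sigma_tr * d_v + 1.0) * np.exp(-sigma_tr * d_v) / d_v**3)

      # Example: reflectance profile for a placeholder marble-like material.
      r = np.linspace(0.1, 5.0, 50)
      print(dipole_Rd(r, sigma_a=0.002, sigma_s_prime=2.6)[:5])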
  • Item
    Deep Kernel Density Estimation for Photon Mapping
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Zhu, Shilin; Xu, Zexiang; Jensen, Henrik Wann; Su, Hao; Ramamoorthi, Ravi; Dachsbacher, Carsten and Pharr, Matt
    Recently, deep learning-based denoising approaches have led to dramatic improvements in low sample-count Monte Carlo rendering. These approaches are aimed at path tracing, which is not ideal for simulating challenging light transport effects like caustics, where photon mapping is the method of choice. However, photon mapping requires very large numbers of traced photons to achieve high-quality reconstructions. In this paper, we develop the first deep learning-based method for particle-based rendering, and specifically focus on photon density estimation, the core of all particle-based methods. We train a novel deep neural network to predict a kernel function to aggregate photon contributions at shading points. Our network encodes individual photons into per-photon features, aggregates them in the neighborhood of a shading point to construct a photon local context vector, and infers a kernel function from the per-photon and photon local context features. This network is easy to incorporate into many previous photon mapping methods (by simply swapping the kernel density estimator) and can produce high-quality reconstructions of complex global illumination effects like caustics with an order of magnitude fewer photons compared to previous photon mapping methods. Our approach greatly reduces the required number of photons, significantly advancing the computational efficiency of photon mapping.
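    As a point of reference, here is a sketch of the conventional hand-crafted photon density estimate that such a learned kernel replaces. The Lambertian BRDF, the fixed radius, and the Epanechnikov kernel are assumptions made for brevity.

      # Fixed-radius photon density estimate at a shading point.
      import numpy as np

      def radiance_estimate(x, photons, albedo, radius):
          """photons: list of dicts with 'pos' (3,) and 'power' (RGB, 3,).
          Returns outgoing radiance assuming a Lambertian surface."""
          L = np.zeros(3)
          brdf = albedo / np.pi                        # Lambertian BRDF
          for p in photons:
              d = np.linalg.norm(p['pos'] - x)
              if d >= radius:
                  continue
              t = d / radius
              kernel = (2.0 / np.pi) * (1.0 - t * t)   # Epanechnikov, integrates to 1 on the unit disk
              L += brdf * p['power'] * kernel / (radius * radius)
          return L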
  • Item
    Importance Sampling Spherical Harmonics
    (The Eurographics Association and Blackwell Publishing Ltd, 2009) Jarosz, Wojciech; Carr, Nathan A.; Jensen, Henrik Wann
    In this paper we present the first practical method for importance sampling functions represented as spherical harmonics (SH). Given a spherical probability density function (PDF) represented as a vector of SH coefficients, our method warps an input point set to match the target PDF using hierarchical sample warping. Our approach is efficient and produces high-quality sample distributions. As a by-product of the sampling procedure, we produce a multi-resolution representation of the density function as either a spherical mip-map or Haar wavelet. By exploiting this implicit conversion, we can extend the method to distribute samples according to the product of an SH function with a spherical mip-map or Haar wavelet. This generalization has immediate applicability in rendering, e.g., importance sampling the product of a BRDF and an environment map where the lighting is stored as a single high-resolution wavelet and the BRDF is represented in spherical harmonics. Since spherical harmonics can be efficiently rotated, this product can be computed on the fly even if the BRDF is stored in local space. Our sampling approach generates over 6 million samples per second while significantly reducing precomputation time and storage requirements compared to previous techniques.
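    Below is a minimal sketch of hierarchical sample warping over a 2D density mip-map, the core operation applied to densities derived from the SH coefficients. The square-to-sphere mapping is omitted, the density is assumed strictly positive, and the array layout is an assumption for illustration.

      # Hierarchical sample warping over a (2^k x 2^k) density array.
      import numpy as np

      def hierarchical_warp(u, v, density):
          """Warp a uniform sample (u, v) in [0,1)^2 so the result follows
          `density` (indexed as density[row, col], strictly positive)."""
          levels = [density.astype(float)]
          while levels[-1].shape[0] > 1:               # build block-sum pyramid
              d = levels[-1]
              levels.append(d[0::2, 0::2] + d[0::2, 1::2] + d[1::2, 0::2] + d[1::2, 1::2])
          col, row = 0, 0
          for d in reversed(levels[:-1]):              # descend coarse -> fine
              a = d[2*row,     2*col]; b = d[2*row,     2*col + 1]   # top-left, top-right
              c = d[2*row + 1, 2*col]; e = d[2*row + 1, 2*col + 1]   # bottom-left, bottom-right
              p_left = (a + c) / (a + b + c + e)
              if u < p_left:                           # warp horizontally
                  u = u / p_left
                  col = 2 * col
                  p_top = a / (a + c)
              else:
                  u = (u - p_left) / (1.0 - p_left)
                  col = 2 * col + 1
                  p_top = b / (b + e)
              if v < p_top:                            # warp vertically within the chosen column
                  v = v / p_top
                  row = 2 * row
              else:
                  v = (v - p_top) / (1.0 - p_top)
                  row = 2 * row + 1
          n = density.shape[0]
          return (col + u) / n, (row + v) / n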
  • Item
    Practical Temporal and Stereoscopic Filtering for Real-time Ray Tracing
    (The Eurographics Association, 2023) Philippi, Henrik; Frisvad, Jeppe Revall; Jensen, Henrik Wann; Ritschel, Tobias; Weidlich, Andrea
    We present a practical method for temporal and stereoscopic filtering that generates stereo-consistent rendering. Existing methods for stereoscopic rendering often reuse samples from one eye for the other or average between the two eyes. These approaches fail in the presence of ray tracing effects such as specular reflections and refractions. We derive a new blending strategy that leverages variance to compute per-pixel blending weights for both temporal and stereoscopic rendering. In the temporal domain, our method works well in a low-noise context and is robust in the presence of inconsistent motion vectors, where existing methods such as temporal anti-aliasing (TAA) and deep learning super sampling (DLSS) produce artifacts. In the stereoscopic domain, our method provides a new way to ensure consistency between the left and right eyes. The stereoscopic version of our method can be used with our new temporal method or with existing methods such as DLSS and TAA. In all combinations, it reduces the error and significantly increases the consistency between the eyes, making it practical for real-time settings such as virtual reality (VR).
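    One plausible instantiation of variance-driven blending is an inverse-variance (minimum-variance) convex combination per pixel, sketched below; the paper's actual weight derivation and its temporal/stereo reprojection details are not reproduced here.

      # Generic inverse-variance blend of two per-pixel estimates
      # (e.g. current frame vs. reprojected history, or left vs. right eye).
      import numpy as np

      def blend(mean_a, var_a, mean_b, var_b, eps=1e-8):
          """Minimum-variance convex combination of two estimates.
          All inputs are per-pixel arrays of identical shape."""
          w_a = var_b / (var_a + var_b + eps)   # weight grows as the *other* estimate gets noisier
          return w_a * mean_a + (1.0 - w_a) * mean_b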
  • Item
    Sparse Sampling for Image-Based SVBRDF Acquisition
    (The Eurographics Association, 2016) Yu, Jiyang; Xu, Zexiang; Mannino, Matteo; Jensen, Henrik Wann; Ramamoorthi, Ravi; Reinhard Klein and Holly Rushmeier
    We acquire the data-driven spatially-varying (SV)BRDF of a flat sample from only a small number of images (typically 20). We generalize the homogeneous BRDF acquisition work of Nielsen et al., who derived an optimal minimal set of lighting/view directions, treating a four-degree-of-freedom spherical gantry as a gonioreflectometer. In contrast, we benefit from using the full 2D camera image from the gantry to enable SVBRDF acquisition. Like Nielsen et al., our method is data-driven, based on the MERL database of isotropic BRDFs, and finds the optimal directions by minimizing the condition number of the acquisition matrix. We extend their approach to SVBRDFs by modifying the optimal incident/outgoing directions to avoid grazing angles that reduce resolution and make alignment of different views difficult. Another key practical issue is aligning multiple viewpoints and correcting for near-field effects. We demonstrate our method on SVBRDF measurements of new flat materials, showing that full data-driven SVBRDF acquisition is now possible from a sparse set of only about 20 light-view pairs.
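    The sketch below shows the general flavor of condition-number-driven direction selection: a greedy search over candidate light/view pairs represented in a data-driven BRDF basis. The `basis` matrix and the greedy strategy are assumptions for illustration; the grazing-angle constraints and near-field corrections described above are omitted.

      # Greedy selection of measurement directions by acquisition-matrix conditioning.
      import numpy as np

      def cond2(m):
          """2-norm condition number via singular values (works for non-square m)."""
          s = np.linalg.svd(m, compute_uv=False)
          return s[0] / max(s[-1], 1e-12)

      def select_directions(basis, k):
          """Pick k rows of `basis` (num_candidates x num_basis), one per
          candidate light/view pair, keeping the selected acquisition matrix
          as well conditioned as possible."""
          chosen = [int(np.argmax(np.linalg.norm(basis, axis=1)))]  # start from the strongest row
          while len(chosen) < k:
              best_i, best_c = None, np.inf
              for i in range(basis.shape[0]):
                  if i in chosen:
                      continue
                  c = cond2(basis[chosen + [i]])
                  if c < best_c:
                      best_i, best_c = i, c
              chosen.append(best_i)
          return chosen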
  • Item
    Progressive Denoising of Monte Carlo Rendered Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Firmino, Arthur; Frisvad, Jeppe Revall; Jensen, Henrik Wann; Chaine, Raphaëlle; Kim, Min H.
    Image denoising based on deep learning has become a powerful tool to accelerate Monte Carlo rendering. Deep learning techniques can produce smooth images using a low sample count. Unfortunately, existing deep learning methods are biased and do not converge to the correct solution as the number of samples increases. In this paper, we propose a progressive denoising technique that aims to use denoising only when it is beneficial and to reduce its impact at high sample counts. We use Stein's unbiased risk estimate (SURE) to estimate the error in the denoised image, and we combine this with a neural network to infer a per-pixel mixing parameter. We further augment this network with confidence intervals based on classical statistics to ensure consistency and convergence of the final denoised image. Our results demonstrate that our method is consistent and that it improves on existing denoising techniques. Furthermore, it can be used in combination with existing high-quality denoisers to ensure consistency. In addition to being asymptotically unbiased, progressive denoising is particularly good at preserving fine details that would otherwise be lost with existing denoisers.
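    A rough sketch of the SURE-driven per-pixel mixing idea follows, with the divergence term approximated by a randomized finite difference; the neural network and the confidence-interval safeguard described above are omitted, and the independence assumption behind the blending weight is a simplification.

      # SURE-based per-pixel mixing of a noisy image and its denoised version.
      import numpy as np

      def sure_per_pixel(noisy, denoiser, var, eps=1e-2):
          """Per-pixel SURE of the denoiser's squared error.
          noisy: (H, W) image, var: (H, W) per-pixel noise variance,
          denoiser: callable image -> image."""
          d = denoiser(noisy)
          # Randomized finite-difference estimate of d(denoiser)/d(noisy).
          probe = np.random.choice([-1.0, 1.0], size=noisy.shape)
          div = probe * (denoiser(noisy + eps * probe) - d) / eps
          return (d - noisy) ** 2 - var + 2.0 * var * div, d

      def progressive_blend(noisy, denoiser, var):
          """Convex combination minimizing the estimated per-pixel error,
          assuming (simplistically) independent errors of the two images."""
          err_d, d = sure_per_pixel(noisy, denoiser, var)
          err_d = np.maximum(err_d, 0.0)            # SURE can dip below zero per pixel
          alpha = var / (var + err_d + 1e-12)       # weight on the denoised image
          return alpha * d + (1.0 - alpha) * noisy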
  • Item
    Practical Ply-Based Appearance Modeling for Knitted Fabrics
    (The Eurographics Association, 2021) Montazeri, Zahra; Gammelmark, Søren; Jensen, Henrik Wann; Zhao, Shuang; Bousseau, Adrien and McGuire, Morgan
    Modeling the geometry and the appearance of knitted fabrics has been challenging due to their complex geometries and interactions with light. Previous surface-based models have difficulty capturing fine-grained knit geometries; micro-appearance models, on the other hand, typically store individual cloth fibers explicitly and are expensive to generate and render. Further, neither class of model offers the flexibility to accurately capture both the reflection and the transmission of light simultaneously. In this paper, we introduce an efficient technique to generate knit models with user-specified knitting patterns. Our model stores individual knit plies with fiber-level detail depicted using normal and tangent mapping. We evaluate our generated models using a wide array of knitting patterns. Further, we qualitatively compare renderings of our models to photos of real samples.
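    The following is a small sketch of the normal/tangent-mapping step that puts fiber-level detail on a smooth ply surface: tangent-space vectors fetched from the maps are rotated into world space by the ply's local frame. The function name and inputs are illustrative; texture fetches and the knit geometry itself are stubbed out.

      # Perturbing a ply's shading frame with normal and tangent maps.
      import numpy as np

      def perturb_frame(n, t, tangent_space_normal, tangent_space_tangent):
          """n, t: geometric normal and ply tangent (world space, unit length).
          tangent_space_*: vectors decoded from the normal/tangent maps."""
          b = np.cross(n, t)                         # bitangent completes the frame
          tbn = np.column_stack([t, b, n])           # tangent-to-world rotation
          n_shading = tbn @ tangent_space_normal
          t_shading = tbn @ tangent_space_tangent
          # Re-orthonormalize the perturbed shading frame.
          n_shading /= np.linalg.norm(n_shading)
          t_shading -= np.dot(t_shading, n_shading) * n_shading
          t_shading /= np.linalg.norm(t_shading)
          return n_shading, t_shading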
  • Item
    The Beam Radiance Estimate for Volumetric Photon Mapping
    (The Eurographics Association and Blackwell Publishing Ltd, 2008) Jarosz, Wojciech; Zwicker, Matthias; Jensen, Henrik Wann
    We present a new method for efficiently simulating the scattering of light within participating media. Using a theoretical reformulation of volumetric photon mapping, we develop a novel photon gathering technique for participating media. Traditional volumetric photon mapping samples the in-scattered radiance at numerous points along the length of a single ray by performing costly range queries within the photon map. Our technique replaces these multiple point-queries with a single beam-query, which explicitly gathers all photons along the length of an entire ray. These photons are used to estimate the accumulated in-scattered radiance arriving from a particular direction and need to be gathered only once per ray. Our method handles both fixed and adaptive kernels, is faster than regular volumetric photon mapping, and produces images with less noise.
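    A sketch of the beam query itself is given below: every photon within a kernel radius of the whole ray contributes once, weighted by a 2D kernel on its perpendicular distance, the phase function, and the transmittance back along the ray. A homogeneous medium, an isotropic phase function, and a fixed kernel radius are assumed for brevity.

      # Beam query sketch: gather all photons near an entire ray at once.
      import numpy as np

      def beam_radiance_estimate(origin, direction, t_max, photons, sigma_t, radius):
          """direction must be unit length; photons: dicts with 'pos' (3,)
          and 'power' (RGB, 3,)."""
          L = np.zeros(3)
          phase = 1.0 / (4.0 * np.pi)                   # isotropic phase function
          for p in photons:
              to_p = p['pos'] - origin
              t = float(np.dot(to_p, direction))        # projection onto the ray
              if t < 0.0 or t > t_max:
                  continue
              d = np.linalg.norm(to_p - t * direction)  # perpendicular distance to the ray
              if d >= radius:
                  continue
              kernel = 1.0 / (np.pi * radius * radius)  # constant 2D kernel over the disk
              transmittance = np.exp(-sigma_t * t)
              L += transmittance * phase * p['power'] * kernel
          return L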
  • Item
    A Physically-Based BSDF for Modeling the Appearance of Paper
    (The Eurographics Association and John Wiley and Sons Ltd., 2014) Papas, Marios; Mesa, Krystle de; Jensen, Henrik Wann; Wojciech Jarosz and Pieter Peers
    We present a novel appearance model for paper. Based on our appearance measurements for matte and glossy paper, we find that paper exhibits a combination of subsurface scattering, specular reflection, retroreflection, and surface sheen. Classic microfacet and simple diffuse reflection models cannot simulate the double-sided appearance of a thin layer. Our novel BSDF model matches our measurements for paper and accounts for both reflection and transmission properties. At the core of the BSDF model is a method for converting a multi-layer subsurface scattering model (BSSRDF) into a BSDF, which allows us to retain physically-based absorption and scattering parameters obtained from the measurements. We also introduce a method for computing the amount of light available for subsurface scattering due to transmission through a rough dielectric surface. Our final model accounts for multiple scattering, single scattering, and surface reflection and is capable of rendering paper with varying levels of roughness and glossiness on both sides.
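    To make the decomposition concrete, here is a toy double-sided thin-sheet BSDF in the same spirit: a specular Fresnel term at the surface, with the remaining energy split between diffuse reflection and diffuse transmission lobes. The albedos and the splitting are placeholders; the actual model derives these quantities from a multi-layer BSSRDF and rough-surface transmission.

      # Toy two-sided paper-like BSDF: Fresnel specular + diffuse reflection/transmission.
      import numpy as np

      def fresnel_dielectric(cos_i, eta=1.5):
          """Unpolarized Fresnel reflectance entering a dielectric of relative
          index eta from air; cos_i is the cosine of the incident angle."""
          sin_t2 = (1.0 / eta) ** 2 * (1.0 - cos_i ** 2)
          if sin_t2 >= 1.0:
              return 1.0                                  # total internal reflection
          cos_t = np.sqrt(1.0 - sin_t2)
          r_s = (cos_i - eta * cos_t) / (cos_i + eta * cos_t)
          r_p = (eta * cos_i - cos_t) / (eta * cos_i + cos_t)
          return 0.5 * (r_s * r_s + r_p * r_p)

      def paper_like_bsdf(cos_i, same_side, albedo_r=0.7, albedo_t=0.25, eta=1.5):
          """Diffuse lobe value for reflection (same_side=True) or transmission
          (same_side=False); the specular Fresnel lobe would be handled
          separately as a delta term."""
          F = fresnel_dielectric(abs(cos_i), eta)
          diffuse_energy = 1.0 - F                        # energy not specularly reflected
          a = albedo_r if same_side else albedo_t
          return diffuse_energy * a / np.pi               # Lambertian reflection/transmission lobe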