Search Results

Now showing 1 - 10 of 45
  • Item
    Next Event Estimation++: Visibility Mapping for Efficient Light Transport Simulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Guo, Jerry Jinfeng; Eisemann, Martin; Eisemann, Elmar; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Determining the visibility between scene points is the most common and compute-intensive operation in Monte Carlo rendering for establishing paths between camera and light source. Unfortunately, many tests reveal occlusions, and the corresponding paths do not contribute to the final image. In this work, we present next event estimation++ (NEE++): a visibility mapping technique that performs visibility tests in a more informed way by caching voxel-to-voxel visibility probabilities. We show two scenarios: Russian-roulette-style rejection of visibility tests and direct importance sampling of the visibility. We show applications to next event estimation and light sampling in a unidirectional path tracer, and to light-subpath sampling in bidirectional path tracing. The technique is simple to implement, easy to add to existing rendering systems, and comes at almost no cost, as the required information can be extracted directly from the rendering process itself. It discards up to 80% of visibility tests on average, while reducing variance by ~20% compared to other state-of-the-art light sampling techniques with the same number of samples. It gracefully handles complex scenes with efficiency similar to Metropolis light transport techniques but with more uniform convergence.
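    The abstract above describes the caching idea only at a high level. As a rough, hypothetical C++ sketch (not the authors' implementation), the snippet below caches voxel-to-voxel visibility probabilities and uses them for Russian-roulette rejection of shadow rays; all names, the grid layout, and the probability floor are assumptions.

    // Hypothetical sketch: voxel-to-voxel visibility cache driving Russian-roulette
    // rejection of shadow rays. A coarse grid (e.g. 16^3 voxels) is assumed so the
    // res^6 probability table stays affordable.
    #include <algorithm>
    #include <cstddef>
    #include <random>
    #include <vector>

    struct VisibilityCache {
        int res;                      // voxels per axis of the coarse grid
        std::vector<float> hits;      // running count of unoccluded tests per voxel pair
        std::vector<float> total;     // running count of all tests per voxel pair

        explicit VisibilityCache(int r)
            : res(r),
              hits(std::size_t(r) * r * r * r * r * r, 0.f),
              total(hits.size(), 0.f) {}

        // Flatten a (fromVoxel, toVoxel) pair of linear voxel indices into one slot.
        std::size_t slot(int from, int to) const {
            return std::size_t(from) * res * res * res + to;
        }

        // Estimated probability that a segment between the two voxels is unoccluded.
        float probability(int from, int to) const {
            std::size_t i = slot(from, to);
            return total[i] > 0.f ? hits[i] / total[i] : 1.f;   // optimistic prior
        }

        // Record the outcome of an actual visibility test.
        void record(int from, int to, bool visible) {
            std::size_t i = slot(from, to);
            total[i] += 1.f;
            if (visible) hits[i] += 1.f;
        }
    };

    // Russian-roulette use: trace the shadow ray only with probability p and divide
    // the surviving contribution by p, which keeps the estimator unbiased.
    float nextEventContribution(VisibilityCache& cache, int fromVoxel, int toVoxel,
                                float unoccludedRadiance, std::mt19937& rng,
                                bool (*traceShadowRay)(int, int)) {
        std::uniform_real_distribution<float> u01(0.f, 1.f);
        float p = std::max(0.05f, cache.probability(fromVoxel, toVoxel));  // assumed floor
        if (u01(rng) >= p) return 0.f;                      // skip the expensive test
        bool visible = traceShadowRay(fromVoxel, toVoxel);  // the actual occlusion query
        cache.record(fromVoxel, toVoxel, visible);
        return visible ? unoccludedRadiance / p : 0.f;
    }

    The same cached probabilities could also drive the paper's second scenario, importance sampling of the visibility when picking lights, which is not sketched here.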
  • Item
    Correlation-Aware Multiple Importance Sampling for Bidirectional Rendering Algorithms
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Grittmann, Pascal; Georgiev, Iliyan; Slusallek, Philipp; Mitra, Niloy and Viola, Ivan
    Combining diverse sampling techniques via multiple importance sampling (MIS) is key to achieving robustness in modern Monte Carlo light transport simulation. Many such methods additionally employ correlated path sampling to boost efficiency. Photon mapping, bidirectional path tracing, and path-reuse algorithms construct sets of paths that share a common prefix. This correlation is ignored by classical MIS heuristics, which can result in poor technique combination and noisy images. We propose a practical and robust solution to that problem. Our idea is to incorporate correlation knowledge into the balance heuristic, based on known path densities that are already required for MIS. This correlation-aware heuristic can achieve considerably lower error than the balance heuristic, while avoiding computational and memory overhead.
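    For context, the classical balance heuristic the abstract refers to weights technique t as w_t = n_t p_t(x) / sum_j n_j p_j(x), using the per-technique sample counts n_j and densities p_j. The sketch below implements only that standard heuristic; the paper's correlation-aware variant adjusts this combination using the known path densities, which is not reproduced here.

    // Standard balance heuristic over a set of sampling techniques.
    // numSamples[j] is the number of samples drawn with technique j and pdf[j] is
    // the density of the given sample under technique j. The correlation-aware
    // heuristic from the paper modifies this combination; it is not shown here.
    #include <cstddef>
    #include <vector>

    float balanceHeuristic(std::size_t t,
                           const std::vector<float>& numSamples,
                           const std::vector<float>& pdf) {
        float denom = 0.f;
        for (std::size_t j = 0; j < pdf.size(); ++j)
            denom += numSamples[j] * pdf[j];
        return denom > 0.f ? (numSamples[t] * pdf[t]) / denom : 0.f;
    }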
  • Item
    Stratified Sampling of Projected Spherical Caps
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Ureña, Carlos; Georgiev, Iliyan; Jakob, Wenzel and Hachisuka, Toshiya
    We present a method for uniformly sampling points inside the projection of a spherical cap onto a plane through the sphere's center. To achieve this, we devise two novel area-preserving mappings from the unit square to this projection, which is often an ellipse but generally has a more complex shape. Our maps allow for low-variance rendering of direct illumination from finite and infinite (e.g. sun-like) spherical light sources by sampling their projected solid angle in a stratified manner. We discuss the practical implementation of our maps and show significant quality improvement over traditional uniform spherical cap sampling in a production renderer.
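    As a point of reference for the "traditional uniform spherical cap sampling" the paper compares against, the sketch below draws directions uniformly (in solid angle) within a cap of given half-angle around an axis. This is the standard baseline construction, not the paper's stratified projected-solid-angle maps; the vector type and frame construction are illustrative.

    // Baseline: uniform sampling of a direction inside a spherical cap with
    // cos(half-angle) = cosThetaMax around `axis`. The paper instead samples the
    // cap's projection onto the plane through the sphere's center, with stratification.
    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3 normalize(Vec3 v) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return {v.x / len, v.y / len, v.z / len};
    }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }

    // u1, u2 are uniform random numbers in [0,1).
    Vec3 sampleSphericalCap(Vec3 axis, float cosThetaMax, float u1, float u2) {
        const float kPi = 3.14159265358979f;
        float cosTheta = 1.f - u1 * (1.f - cosThetaMax);    // uniform in solid angle
        float sinTheta = std::sqrt(std::max(0.f, 1.f - cosTheta * cosTheta));
        float phi = 2.f * kPi * u2;

        Vec3 w = normalize(axis);                           // orthonormal frame around the axis
        Vec3 helper = std::fabs(w.x) > 0.9f ? Vec3{0.f, 1.f, 0.f} : Vec3{1.f, 0.f, 0.f};
        Vec3 u = normalize(cross(helper, w));
        Vec3 v = cross(w, u);

        float sx = sinTheta * std::cos(phi);
        float sy = sinTheta * std::sin(phi);
        return {u.x * sx + v.x * sy + w.x * cosTheta,
                u.y * sx + v.y * sy + w.y * cosTheta,
                u.z * sx + v.z * sy + w.z * cosTheta};
    }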
  • Item
    Temporally Reliable Motion Vectors for Real-time Ray Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Zeng, Zheng; Liu, Shiqiu; Yang, Jinglei; Wang, Lu; Yan, Ling-Qi; Mitra, Niloy and Viola, Ivan
    Real-time ray tracing (RTRT) is becoming pervasive. The key to RTRT is a reliable denoising scheme that reconstructs clean images from significantly undersampled noisy inputs, usually at 1 sample per pixel, as limited by the computing power of current hardware. State-of-the-art reconstruction methods all rely on temporal filtering to find correspondences of current pixels in the previous frame, described by per-pixel screen-space motion vectors. While these approaches have proven powerful, they share a common issue: the temporal information cannot be used when the motion vectors are invalid, i.e., when temporal correspondences are not readily available or do not exist in theory. We introduce temporally reliable motion vectors that aim at a deeper exploitation of temporal coherence, especially for the generally difficult cases of shadows, glossy reflections, and occlusions, with the key idea of detecting and tracking the cause of each effect. We show that our temporally reliable motion vectors produce significantly better temporal results on a variety of dynamic scenes when compared to state-of-the-art methods, with negligible performance overhead.
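    The abstract assumes familiarity with screen-space temporal reprojection. The sketch below shows the conventional scheme only: reproject the current pixel into the previous frame via its motion vector, validate the correspondence, and blend exponentially. The depth/normal test and the blend factor are common heuristics chosen here as assumptions; the paper's contribution, constructing more reliable motion vectors for shadows, glossy reflections and occlusions, is not reproduced.

    // Conventional temporal accumulation with per-pixel screen-space motion vectors.
    // The geometric validation (depth + normal agreement) and the blend factor are
    // common heuristics, not the paper's method.
    #include <cmath>

    struct Pixel { float color[3]; float depth; float normal[3]; };

    static bool correspondenceValid(const Pixel& cur, const Pixel& prev) {
        float depthDiff = std::fabs(cur.depth - prev.depth);
        float normalDot = cur.normal[0] * prev.normal[0] +
                          cur.normal[1] * prev.normal[1] +
                          cur.normal[2] * prev.normal[2];
        return depthDiff < 0.01f * cur.depth && normalDot > 0.9f;  // thresholds assumed
    }

    void temporalBlend(Pixel& cur, const Pixel* prevFrame, int width, int height,
                       int x, int y, float motionX, float motionY, float alpha = 0.2f) {
        int px = int(std::lround(x - motionX));             // reprojected position in the previous frame
        int py = int(std::lround(y - motionY));
        if (px < 0 || py < 0 || px >= width || py >= height) return;  // off-screen: keep current sample

        const Pixel& prev = prevFrame[py * width + px];
        if (!correspondenceValid(cur, prev)) return;        // invalid motion vector: discard history

        for (int c = 0; c < 3; ++c)                         // exponential moving average with history
            cur.color[c] = alpha * cur.color[c] + (1.f - alpha) * prev.color[c];
    }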
  • Item
    Unsupervised Image Reconstruction for Gradient-Domain Volumetric Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Xu, Zilin; Sun, Qiang; Wang, Lu; Xu, Yanning; Wang, Beibei; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Gradient-domain rendering can greatly improve the convergence of light transport simulation by exploiting smoothness in image space. These methods generate image gradients and solve an image reconstruction problem using the rendered image and the gradient images. Recently, gradient-domain volumetric photon density estimation was proposed for homogeneous participating media. However, its image reconstruction relies on traditional L1 reconstruction, which leads to obvious artifacts when only a few rendering passes are performed. Deep-learning-based reconstruction methods have been explored for surface rendering, but they are not suitable for volume density estimation. In this paper, we propose an unsupervised neural network for image reconstruction in gradient-domain volumetric photon density estimation, more specifically volumetric photon mapping, using a variant of GradNet with an encoded shift connection and a separate auxiliary feature branch that includes volume-based auxiliary features such as transmittance and photon density. Our network smooths the image at a global scale and preserves high-frequency details at a small scale. We demonstrate that our network produces higher-quality results than previous work. Although we only consider volumetric photon mapping, it is straightforward to extend our method to other estimators, such as beam radiance estimation.
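    For readers unfamiliar with the reconstruction step being replaced: classical (screened Poisson) gradient-domain reconstruction solves min_I ||I - I_base||^2 + alpha * ||grad(I) - G||^2, where I_base is the noisy primal image and G the sampled gradients (an L1 norm gives the "traditional L1 reconstruction" mentioned above). The sketch below is one plain gradient-descent step for the L2 case, as an assumed illustration of that baseline, not of the proposed network.

    // One gradient-descent step of L2 screened Poisson reconstruction:
    //   minimize  ||I - base||^2 + alpha * (||dx(I) - Gx||^2 + ||dy(I) - Gy||^2)
    // This is the classical baseline the paper's unsupervised network replaces.
    #include <cstddef>
    #include <vector>

    void screenedPoissonStep(std::vector<float>& I, const std::vector<float>& base,
                             const std::vector<float>& Gx, const std::vector<float>& Gy,
                             int w, int h, float alpha, float stepSize) {
        auto at = [&](int x, int y) { return y * w + x; };
        std::vector<float> grad(I.size(), 0.f);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                int i = at(x, y);
                grad[i] += 2.f * (I[i] - base[i]);                    // data (primal) term
                if (x + 1 < w) {                                      // horizontal gradient term
                    float r = (I[at(x + 1, y)] - I[i]) - Gx[i];
                    grad[i]            -= 2.f * alpha * r;
                    grad[at(x + 1, y)] += 2.f * alpha * r;
                }
                if (y + 1 < h) {                                      // vertical gradient term
                    float r = (I[at(x, y + 1)] - I[i]) - Gy[i];
                    grad[i]            -= 2.f * alpha * r;
                    grad[at(x, y + 1)] += 2.f * alpha * r;
                }
            }
        for (std::size_t i = 0; i < I.size(); ++i)
            I[i] -= stepSize * grad[i];                               // descent update
    }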
  • Item
    Real-time Denoising Using BRDF Pre-integration Factorization
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Zhuang, Tao; Shen, Pengfei; Wang, Beibei; Liu, Ligang; Zhang, Fang-Lue and Eisemann, Elmar and Singh, Karan
    Path tracing has been used for real-time rendering, thanks to powerful GPUs. Unfortunately, path tracing produces noisy results; thus, filtering or denoising is often applied as a post-process to remove the noise. Previous works produce high-quality denoised results by accumulating temporal samples. However, they cannot preserve the details from bidirectional reflectance distribution function (BRDF) maps (e.g., roughness maps). In this paper, we introduce a BRDF pre-integration factorization for denoising that better preserves the details from BRDF maps. More specifically, we reformulate the rendering equation into two components: the BRDF pre-integration component and the weighted-lighting component. The BRDF pre-integration component is noise-free, since it does not depend on the lighting. Another key observation is that the weighted-lighting component tends to be smooth and low-frequency, which indicates that it is more suitable for denoising than the final rendered image. Hence, the weighted-lighting component is denoised separately. Our BRDF pre-integration demodulation approach is compatible with many real-time filtering methods. We have implemented it in spatio-temporal variance-guided filtering (SVGF), ReLAX, and ReBLUR. Compared to the original methods, ours better preserves the details from BRDF maps, while both the memory and time overheads are negligible.
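    The factorization described above follows a demodulate / filter / remodulate pattern: divide the noisy image by the noise-free BRDF pre-integration term, denoise the smooth remainder, then multiply the term back. The sketch below shows only that generic pattern; the denoise callback stands in for SVGF, ReLAX or ReBLUR, and the epsilon guard is an assumption.

    // Demodulate-filter-remodulate pattern for detail-preserving denoising.
    // brdfPreint is assumed to be the per-pixel, noise-free BRDF pre-integration
    // component; `denoise` stands in for an existing real-time filter such as SVGF.
    #include <cstddef>
    #include <functional>
    #include <vector>

    void denoiseWithBrdfFactorization(std::vector<float>& color,
                                      const std::vector<float>& brdfPreint,
                                      int w, int h,
                                      const std::function<void(std::vector<float>&, int, int)>& denoise) {
        const float eps = 1e-4f;                            // guard against division by zero
        std::vector<float> weightedLighting(color.size());
        for (std::size_t i = 0; i < color.size(); ++i)      // demodulate: remove BRDF-map detail
            weightedLighting[i] = color[i] / (brdfPreint[i] + eps);

        denoise(weightedLighting, w, h);                    // filter only the smooth component

        for (std::size_t i = 0; i < color.size(); ++i)      // remodulate: restore BRDF-map detail
            color[i] = weightedLighting[i] * (brdfPreint[i] + eps);
    }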
  • Item
    Practical Face Reconstruction via Differentiable Ray Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Dib, Abdallah; Bharaj, Gaurav; Ahn, Junghyun; Thébault, Cédric; Gosselin, Philippe; Romeo, Marco; Chevallier, Louis; Mitra, Niloy and Viola, Ivan
    We present a novel differentiable ray-tracing based face reconstruction approach in which scene attributes (3D geometry; diffuse, specular, and roughness reflectance; pose; camera parameters; and scene illumination) are estimated from unconstrained monocular images. The proposed method models scene illumination via a novel parameterized virtual light stage which, in conjunction with differentiable ray tracing, yields a coarse-to-fine optimization formulation for face reconstruction. Our method not only handles unconstrained illumination and self-shadowing conditions, but also estimates diffuse and specular albedos. To estimate face attributes consistently and with practical semantics, a two-stage optimization strategy systematically optimizes subsets of the parametric attributes, where subsequent attribute estimations factor in those previously estimated. For example, self-shadows estimated during the first stage prevent their being baked into the personalized diffuse and specular albedos in the second stage. We show the efficacy of our approach in several real-world scenarios, where face attributes can be estimated even under extreme illumination conditions. Ablation studies, analyses, and comparisons against several recent state-of-the-art methods show the improved accuracy and versatility of our approach. With consistent face attribute reconstruction, our method enables several style (illumination, albedo, self-shadow) editing and transfer applications, as discussed in the paper.
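    The two-stage strategy can be pictured as the control flow below: a first stage optimizes geometry, pose and illumination, then a second stage estimates the albedo maps with the stage-one attributes held fixed. Everything here (the parameter blocks, the optimize callback, the iteration counts) is a hypothetical outline, not the paper's differentiable ray tracer or its loss.

    // Hypothetical outline of the two-stage, coarse-to-fine optimization order.
    // `optimize` stands in for one differentiable-rendering gradient step on the
    // named parameter block; it is an assumption, not the paper's optimizer.
    #include <functional>
    #include <string>
    #include <vector>

    struct FaceParams {
        std::vector<float> geometry, pose, illumination;    // stage-1 attributes
        std::vector<float> diffuse, specular, roughness;    // stage-2 attributes
    };

    void reconstructFace(FaceParams& params, int itersPerStage,
                         const std::function<void(FaceParams&, const std::string&)>& optimize) {
        // Stage 1: geometry, pose and illumination (self-shadows follow from these).
        for (int i = 0; i < itersPerStage; ++i)
            for (const char* block : {"geometry", "pose", "illumination"})
                optimize(params, block);
        // Stage 2: reflectance, with stage-1 attributes held fixed so the estimated
        // self-shadows are not baked into the diffuse and specular albedos.
        for (int i = 0; i < itersPerStage; ++i)
            for (const char* block : {"diffuse", "specular", "roughness"})
                optimize(params, block);
    }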
  • Item
    Global Illumination Shadow Layers
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) DESRICHARD, François; Vanderhaeghe, David; PAULIN, Mathias; Boubekeur, Tamy and Sen, Pradeep
    Computer graphics artists often resort to compositing to rework light effects in a synthetic image without requiring a new render. Shadows are primary subjects of artistic manipulation as they carry important stylistic information while our perception is tolerant of their editing. In this paper we formalize the notion of global shadow, generalizing the direct shadows found in previous work to a global illumination context. We define an object's shadow layer as the difference between two altered renders of the scene. A shadow layer contains the radiance lost on the camera film because of a given object. We translate this definition into the theoretical framework of Monte Carlo integration, obtaining a concise expression of the shadow layer. Building on it, we propose a path tracing algorithm that renders both the original image and any number of shadow layers in a single pass: the user may choose to separate shadows on a per-object and per-light basis, enabling intuitive and decoupled edits.
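    Restricted to direct lighting, the layer definition can be illustrated very simply: whenever a shadow ray is blocked by an object, the radiance that object removed from the film is splatted into that object's layer instead of the image. This is only a conceptual sketch under that assumption; the paper's single-pass estimator covers full global illumination and per-light separation.

    // Conceptual sketch for the direct-lighting case only: the contribution an
    // occluder removes from the camera film is accumulated into its shadow layer.
    // The paper's estimator generalizes this to global illumination.
    #include <vector>

    void splatNextEventSample(std::vector<float>& image,
                              std::vector<std::vector<float>>& shadowLayers,  // one layer per object
                              int pixel, float unoccludedContribution,
                              bool blocked, int occluderId) {
        if (!blocked)
            image[pixel] += unoccludedContribution;                        // ordinary rendering path
        else
            shadowLayers[occluderId][pixel] += unoccludedContribution;     // radiance lost to this object
    }

    In this simplified model, summing the image and all layers recovers the unshadowed direct lighting, which is what makes per-object shadow edits compositable.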
  • Item
    A New Microflake Model With Microscopic Self-shadowing for Accurate Volume Downsampling
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Loubet, Guillaume; Neyret, Fabrice; Gutierrez, Diego and Sheffer, Alla
    Naive linear methods for downsampling high-resolution microflake volumes often produce inaccurate appearance, especially when input voxels are very opaque. Preserving correct appearance at all resolutions requires taking into account masking-shadowing effects that occur between and inside dense input voxels. We introduce a new microflake model whose additional parameters characterize self-shadowing effects at a microscopic scale. We provide an anisotropic self-shadowing function and microflake distributions for which the scattering coefficients and the phase functions of our model have closed-form expressions. We use this model in a new downsampling approach in which scattering parameters are computed from local estimations of self-shadowing probabilities in the input volume. Unlike previous work, our method handles datasets with spatially varying scattering parameters, semi-transparent volumes and datasets with intricate silhouettes. We show that our method generates LoDs with correct transparency and consistent appearance through scales for a wide range of challenging datasets, allowing for huge memory savings and efficient distant rendering without loss of quality.
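    For contrast, the "naive linear methods" criticized above amount to plainly averaging the child voxels of each coarse voxel, as in the density-only sketch below (names and layout assumed). The paper's self-shadowing-aware parameter fitting is not reproduced here.

    // Naive linear 2x2x2 downsampling of a density grid: each coarse voxel is the
    // plain average of its eight children. This is the baseline that loses accuracy
    // for very opaque voxels; the paper fits its microflake parameters instead.
    #include <cstddef>
    #include <vector>

    std::vector<float> downsampleDensity(const std::vector<float>& fine, int res) {
        int coarseRes = res / 2;                             // res is assumed even
        std::vector<float> coarse(std::size_t(coarseRes) * coarseRes * coarseRes, 0.f);
        auto fineAt = [&](int x, int y, int z) {
            return fine[(std::size_t(z) * res + y) * res + x];
        };
        for (int z = 0; z < coarseRes; ++z)
            for (int y = 0; y < coarseRes; ++y)
                for (int x = 0; x < coarseRes; ++x) {
                    float sum = 0.f;
                    for (int dz = 0; dz < 2; ++dz)
                        for (int dy = 0; dy < 2; ++dy)
                            for (int dx = 0; dx < 2; ++dx)
                                sum += fineAt(2 * x + dx, 2 * y + dy, 2 * z + dz);
                    coarse[(std::size_t(z) * coarseRes + y) * coarseRes + x] = sum / 8.f;
                }
        return coarse;
    }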
  • Item
    Gradient Outlier Removal for Gradient-Domain Path Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Ha, Saerom; Oh, Sojin; Back, Jonghee; Yoon, Sung-Eui; Moon, Bochang; Alliez, Pierre and Pellacini, Fabio
    We present a new outlier removal technique for gradient-domain path tracing (G-PT), which computes image gradients as well as colors. Our approach rejects gradient outliers whose estimated errors are much higher than those of the other gradients, improving reconstruction quality for G-PT. We formulate our outlier removal problem as a least-trimmed-squares optimization, which employs only a subset of the gradients so that a final image can be reconstructed without including the gradient outliers. In addition, we design this outlier removal process so that the chosen subset of gradients maintains connectivity between pixels, preventing pixels from becoming isolated. Lastly, the optimal number of inlier gradients is estimated to minimize our reconstruction error. We demonstrate that our reconstruction, which robustly rejects gradient outliers, produces visually and numerically improved results compared to the previous screened Poisson reconstruction that uses all the gradients.
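    The least-trimmed-squares step described above can be pictured as keeping only the k gradients with the smallest estimated errors before reconstruction. The sketch below shows that selection step alone, assuming the per-gradient error estimates are given; the connectivity constraint and the estimation of the optimal inlier count k from the paper are omitted.

    // Least-trimmed-squares style selection: keep the k gradients with the smallest
    // estimated errors and discard the rest before the Poisson reconstruction.
    // Connectivity enforcement and the choice of k are handled by the paper, not here.
    #include <algorithm>
    #include <numeric>
    #include <vector>

    std::vector<int> selectInlierGradients(const std::vector<float>& estimatedError, int k) {
        std::vector<int> order(estimatedError.size());
        std::iota(order.begin(), order.end(), 0);            // 0, 1, 2, ... (gradient indices)
        std::partial_sort(order.begin(), order.begin() + k, order.end(),
                          [&](int a, int b) { return estimatedError[a] < estimatedError[b]; });
        order.resize(k);                                     // indices of the k inlier gradients
        return order;
    }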