Search Results

Now showing 1 - 5 of 5
  • Item
    On-Site Example-Based Material Appearance Acquisition
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Lin, Yiming; Peers, Pieter; Ghosh, Abhijeet; Boubekeur, Tamy and Sen, Pradeep
    We present a novel example-based material appearance modeling method suitable for rapid digital content creation. Our method requires only a single HDR photograph of a homogeneous isotropic dielectric exemplar object under known natural illumination. While conventional methods for appearance modeling require prior knowledge of the object shape, our method does not, nor does it recover the shape explicitly, greatly simplifying on-site appearance acquisition to a lightweight photography process suited for non-expert users. As our central contribution, we propose a shape-agnostic BRDF estimation procedure based on binary RGB profile matching. We also model the appearance of materials exhibiting a regular or stationary texture-like appearance by synthesizing appropriate mesostructure from the same input HDR photograph and a mesostructure exemplar with (roughly) similar features. We believe our lightweight method for on-site shape-agnostic appearance acquisition presents a suitable alternative for a variety of applications that require plausible "rapid appearance modeling".
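    The abstract does not detail how binary RGB profile matching works, so the following is only a schematic illustration of profile matching in general, not the paper's actual procedure: candidate BRDF parameter sets are associated with precomputed intensity profiles under the known illumination, and the candidate whose profile best matches the photograph's profile is selected. The profile construction (sorted, median-thresholded pixel intensities) and all function names here are assumptions for illustration.
      # Schematic profile-matching sketch; NOT the paper's method.
      import numpy as np

      def binary_profile(pixels_rgb, n_bins=64):
          """Sort pixel intensities per channel and threshold them against the
          channel median, giving a shape-independent binary RGB profile."""
          profile = []
          for c in range(3):
              vals = np.sort(pixels_rgb[:, c])
              idx = np.linspace(0, len(vals) - 1, n_bins).astype(int)
              profile.append(vals[idx] > np.median(vals))
          return np.concatenate(profile)

      def match_brdf(photo_pixels, candidate_profiles):
          """candidate_profiles: dict mapping BRDF parameter tuples to binary
          profiles simulated under the same known illumination."""
          target = binary_profile(photo_pixels)
          return min(candidate_profiles,
                     key=lambda p: np.count_nonzero(candidate_profiles[p] != target))

      # Toy usage with random stand-in data.
      rng = np.random.default_rng(1)
      photo = rng.random((5000, 3))
      candidates = {("rough",): binary_profile(rng.random((5000, 3))),
                    ("smooth",): binary_profile(rng.random((5000, 3)))}
      best = match_brdf(photo, candidates)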
  • Item
    Flexible SVBRDF Capture with a Multi-Image Deep Network
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Deschaintre, Valentin; Aittala, Miika; Durand, Fredo; Drettakis, George; Bousseau, Adrien; Boubekeur, Tamy and Sen, Pradeep
    Empowered by deep learning, recent methods for material capture can estimate a spatially-varying reflectance from a single photograph. Such lightweight capture is in stark contrast with the tens or hundreds of pictures required by traditional optimization-based approaches. However, a single image is often simply not enough to observe the rich appearance of real-world materials. We present a deep-learning method capable of estimating material appearance from a variable number of uncalibrated and unordered pictures captured with a handheld camera and flash. Thanks to an order-independent fusing layer, this architecture extracts the most useful information from each picture, while benefiting from strong priors learned from data. The method can handle both view and light direction variation without calibration. We show how our method improves its prediction with the number of input pictures, and reaches high-quality reconstructions with as few as 1 to 10 images - a sweet spot between existing single-image and complex multi-image approaches.
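    A minimal sketch of the order-independence idea mentioned in the abstract: per-image features are pooled with an elementwise maximum, so the fused result does not depend on how many pictures are given or in which order. The encoder stand-in and function names are assumptions, not the authors' code.
      # Order-independent fusion via elementwise max pooling (illustrative only).
      import numpy as np

      def encode(image):
          # Hypothetical stand-in for a convolutional per-image encoder;
          # here the image itself plays the role of its feature map.
          return image

      def fuse_order_independent(images):
          # Stack per-image features and take the elementwise maximum over the
          # image axis; permuting or repeating inputs leaves the result unchanged.
          features = np.stack([encode(img) for img in images], axis=0)
          return features.max(axis=0)

      # Toy usage: three "photographs" of the same 4x4 material patch.
      photos = [np.random.rand(4, 4, 3) for _ in range(3)]
      fused = fuse_order_independent(photos)
      shuffled = fuse_order_independent(photos[::-1])
      assert np.allclose(fused, shuffled)  # input order does not matter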
  • Item
    Learned Fitting of Spatially Varying BRDFs
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Merzbach, Sebastian; Hermann, Max; Rump, Martin; Klein, Reinhard; Boubekeur, Tamy and Sen, Pradeep
    The use of spatially varying reflectance models (SVBRDFs) is the state of the art in physically based rendering, and the ultimate goal is to acquire them from real-world samples. Recently, several promising deep learning approaches have emerged that create such models from a few uncalibrated photos after being trained on synthetic SVBRDF datasets. While the achieved results are already very impressive, the reconstruction accuracy of these approaches is still far from that of specialized devices. On the other hand, fitting SVBRDF parameter maps to the gigabytes of calibrated HDR images per material acquired by state-of-the-art high-quality material scanners takes on the order of several hours for realistic spatial resolutions. In this paper, we present a first deep learning approach capable of producing SVBRDF parameter maps more than two orders of magnitude faster than state-of-the-art approaches, while still providing results of equal quality and generalizing to new materials unseen during training. This is made possible by training our network on a large-scale database of material scans that we have gathered with a commercially available SVBRDF scanner. In particular, we train a convolutional neural network to map calibrated input images to the 13 parameter maps of an anisotropic Ward BRDF, modified to account for Fresnel reflections, and evaluate the results by comparing the measured images against re-renderings from our SVBRDF predictions. The approach is extensively validated on real-world data taken from our material database, which we make publicly available at https://cg.cs.uni-bonn.de/svbrdfs/.
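    To make the fitted model concrete, here is a minimal per-point sketch of evaluating an anisotropic Ward specular lobe with a Fresnel factor. The Schlick approximation and the exact parameterization are assumptions; the paper's modification and its 13 per-pixel parameter maps may differ in detail.
      # Anisotropic Ward specular lobe (Ward 1992) times a Schlick Fresnel factor.
      import numpy as np

      def normalize(v):
          return v / np.linalg.norm(v)

      def ward_specular(l, v, n, x, y, rho_s, ax, ay, f0):
          """l, v: unit light/view directions; n: normal; x, y: tangent/bitangent;
          rho_s: specular albedo; ax, ay: anisotropic roughness; f0: reflectance
          at normal incidence (Schlick Fresnel, used here as an assumption)."""
          h = normalize(l + v)
          nl, nv, nh = n @ l, n @ v, n @ h
          if nl <= 0 or nv <= 0:
              return 0.0
          expo = -((h @ x / ax) ** 2 + (h @ y / ay) ** 2) / (nh ** 2)
          lobe = np.exp(expo) / (4.0 * np.pi * ax * ay * np.sqrt(nl * nv))
          fresnel = f0 + (1.0 - f0) * (1.0 - (h @ v)) ** 5
          return rho_s * lobe * fresnel

      # Toy usage with an orthonormal shading frame.
      n, x, y = np.array([0., 0., 1.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
      l = normalize(np.array([0.3, 0.1, 1.0]))
      v = normalize(np.array([-0.2, 0.0, 1.0]))
      print(ward_specular(l, v, n, x, y, rho_s=0.8, ax=0.15, ay=0.05, f0=0.04))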
  • Item
    Real-time Image-based Lighting of Microfacet BRDFs with Varying Iridescence
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Kneiphof, Tom; Golla, Tim; Klein, Reinhard; Boubekeur, Tamy and Sen, Pradeep
    Iridescence is a natural phenomenon that is perceived as gradual color changes depending on the view and illumination directions. Prominent examples are the colors seen in oil films and soap bubbles. Unfortunately, iridescent effects are particularly difficult to recreate in real-time computer graphics. We present a high-quality real-time method for rendering iridescent effects under image-based lighting. Previous methods model dielectric thin films of varying thickness on top of an arbitrary microfacet model with a conducting or dielectric base material, and evaluate the resulting reflectance term, responsible for the iridescent effects, only for a single direction when using real-time image-based lighting. This leads to bright halos at grazing angles and over-saturated colors on rough surfaces, causing an unnatural appearance that is not observed in ground-truth data. We address this problem by taking the distribution of light directions, given by the environment map and surface roughness, into account when evaluating the reflectance term. In particular, our approach prefilters the first and second moments of the light direction, which are used to evaluate a filtered version of the reflectance term. We show that the visual quality of our approach is superior to that of previous methods, while having only a small negative impact on performance.
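    A minimal sketch of what "first and second moments of the light direction" means in practice: weighted averages of the light direction and of its outer product over the incoming lighting. The simple luminance weighting below is an assumption; the paper prefilters these moments (e.g. per roughness level) and feeds them into a filtered iridescent reflectance term.
      # First and second moments of the light direction over sampled lighting.
      import numpy as np

      def light_direction_moments(directions, radiance):
          """directions: (N, 3) unit light directions; radiance: (N,) weights.
          Returns the weighted mean direction E[d] and the 3x3 matrix E[d d^T]."""
          w = radiance / radiance.sum()
          first = (w[:, None] * directions).sum(axis=0)                  # E[d]
          second = np.einsum('i,ij,ik->jk', w, directions, directions)   # E[d d^T]
          return first, second

      # Toy usage: random upper-hemisphere directions with random radiance.
      rng = np.random.default_rng(0)
      d = rng.normal(size=(1024, 3))
      d[:, 2] = np.abs(d[:, 2])
      d /= np.linalg.norm(d, axis=1, keepdims=True)
      L = rng.random(1024)
      mu, M2 = light_direction_moments(d, L)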
  • Item
    Glint Rendering based on a Multiple-Scattering Patch BRDF
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Chermain, Xavier; Claux, Frédéric; Mérillou, Stéphane; Boubekeur, Tamy and Sen, Pradeep
    Rendering materials such as metallic paints, scratched metals, and rough plastics requires glint integrators that can capture all micro-specular highlights falling into a pixel footprint, faithfully replicating surface appearance. Specular normal maps can be used to represent a wide range of arbitrary micro-structures. The use of normal maps comes with important drawbacks, though: the overall appearance is dark due to back-facing normals, and importance sampling is suboptimal, especially when the micro-surface is very rough. We propose a new glint integrator relying on a multiple-scattering, patch-based BRDF that addresses these issues. To do so, our method uses a modified version of microfacet-based normal mapping [SHHD17] designed for glint rendering, leveraging symmetric microfacets. To model multiple scattering, we re-introduce the energy lost by a perfectly specular, single-scattering formulation instead of using expensive random walks. This reflectance model is the basis of our patch-based BRDF, enabling robust sampling and artifact-free rendering with a natural appearance. Additional computation costs amount to about 40% in the worst cases compared to previous methods [YHMR16, CCM18].
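    The sketch below illustrates only the generic energy-compensation idea hinted at in the abstract (re-adding the energy a single-scattering specular term loses), not the paper's symmetric-microfacet, patch-based derivation: the directional albedo of the single-scattering lobe is estimated by Monte Carlo, and the missing fraction is redistributed with a simple Lambertian-shaped lobe. Function names and the compensation shape are assumptions.
      # Generic energy compensation for a single-scattering specular BRDF.
      import numpy as np

      def single_scatter_albedo(brdf, v, n, n_samples=256, rng=None):
          """Cosine-weighted Monte Carlo estimate of the energy reflected by
          `brdf(l, v, n)` over all light directions, for a fixed view v."""
          if rng is None:
              rng = np.random.default_rng(0)
          total = 0.0
          for _ in range(n_samples):
              # Cosine-weighted hemisphere sample around n = (0, 0, 1).
              u1, u2 = rng.random(), rng.random()
              r, phi = np.sqrt(u1), 2.0 * np.pi * u2
              l = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
              # pdf = cos(theta)/pi, so the cosine cancels: estimator = pi * f.
              total += np.pi * brdf(l, v, n)
          return total / n_samples

      def compensated_brdf(brdf, l, v, n):
          # Redistribute the missing energy with a Lambertian-shaped lobe; the
          # paper instead uses a multiple-scattering patch-based formulation.
          e_ss = single_scatter_albedo(brdf, v, n)
          return brdf(l, v, n) + max(0.0, 1.0 - e_ss) / np.pi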