Search Results
Showing 1-10 of 20
Analytic Spectral Integration of Birefringence-Induced Iridescence
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Steinberg, Shlomi; Boubekeur, Tamy and Sen, Pradeep
Optical phenomena that are only observable in optically anisotropic materials are generally ignored in computer graphics. However, such optical effects are not restricted to exotic materials and can also be observed in common translucent objects when optical anisotropy is induced, e.g. via mechanical stress. Furthermore, accurate prediction and reproduction of these optical effects have important practical applications. We provide a short but complete analysis of the relevant electromagnetic theory of light propagation in optically anisotropic media and derive the full set of formulations required to render birefringent materials. We then present a novel method for spectral integration of refraction and reflection in an anisotropic slab. Our approach allows fast and robust rendering of birefringence-induced iridescence in a physically faithful manner and is applicable to both real-time and offline rendering.

On-Site Example-Based Material Appearance Acquisition
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Lin, Yiming; Peers, Pieter; Ghosh, Abhijeet; Boubekeur, Tamy and Sen, Pradeep
We present a novel example-based material appearance modeling method suitable for rapid digital content creation. Our method only requires a single HDR photograph of a homogeneous isotropic dielectric exemplar object under known natural illumination. While conventional methods for appearance modeling require prior knowledge of the object shape, our method does not, nor does it recover the shape explicitly, greatly simplifying on-site appearance acquisition to a lightweight photography process suited for non-expert users.
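For the birefringence paper above, the general idea of spectral integration (accumulating wavelength-dependent interference into a displayable color) can be illustrated with a toy model; the cos² two-beam interference term and the crude RGB binning below are illustrative assumptions, not the paper's actual analytic formulation for anisotropic slabs:

```python
import math

def interference_rgb(opd_nm):
    """Toy spectral integration: two-beam interference intensity
    cos^2(pi * OPD / wavelength), summed over the visible spectrum
    and accumulated into crude blue/green/red bins."""
    rgb = [0.0, 0.0, 0.0]
    wavelengths = list(range(380, 781, 5))  # nm, 5 nm steps
    for lam in wavelengths:
        intensity = math.cos(math.pi * opd_nm / lam) ** 2
        if lam < 490:
            rgb[2] += intensity  # blue bin
        elif lam < 580:
            rgb[1] += intensity  # green bin
        else:
            rgb[0] += intensity  # red bin
    return [c / len(wavelengths) for c in rgb]
```

Varying the optical path difference (OPD) shifts which wavelengths interfere constructively, which is what produces the gradual iridescent color changes.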
As our central contribution, we propose a shape-agnostic BRDF estimation procedure based on binary RGB profile matching. We also model the appearance of materials exhibiting a regular or stationary texture-like appearance by synthesizing appropriate mesostructure from the same input HDR photograph and a mesostructure exemplar with (roughly) similar features. We believe our lightweight method for on-site shape-agnostic appearance acquisition presents a suitable alternative for a variety of applications that require plausible ''rapid appearance modeling''.

Deep-learning the Latent Space of Light Transport
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Hermosilla, Pedro; Maisch, Sebastian; Ritschel, Tobias; Ropinski, Timo; Boubekeur, Tamy and Sen, Pradeep
We suggest a method to directly deep-learn light transport, i.e., the mapping from a 3D geometry-illumination-material configuration to a shaded 2D image. While many previous learning methods have employed 2D convolutional neural networks applied to images, we show for the first time that light transport can be learned directly in 3D. The benefit of 3D over 2D is that the former can also correctly capture illumination effects related to occluded and/or semi-transparent geometry. To learn 3D light transport, we represent the 3D scene as an unstructured 3D point cloud, which is later, during rendering, projected onto the 2D output image. Thus, we suggest a two-stage operator comprising a 3D network that first transforms the point cloud into a latent representation, which is then projected to the 2D output image using a dedicated 3D-2D network in a second step. We show that our approach results in improved quality in terms of temporal coherence while retaining most of the computational efficiency of common 2D methods.
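The two-stage operator described in Deep-learning the Latent Space of Light Transport can be sketched with toy stand-ins: stage 1 produces a per-point latent feature and stage 2 projects (splats) those features onto the 2D image grid. Both functions below are illustrative placeholders, not the paper's learned networks:

```python
def encode_points(points):
    """Stage 1 (toy stand-in for the learned 3D network):
    one latent scalar per 3D point."""
    return [0.5 * (x + y + z) for x, y, z in points]

def project_to_image(points, latents, width, height):
    """Stage 2 (toy stand-in for the 3D-2D network): splat each point's
    latent feature into the pixel its (x, y) position falls in."""
    image = [[0.0] * width for _ in range(height)]
    for (x, y, _z), f in zip(points, latents):
        px = min(int(x * width), width - 1)
        py = min(int(y * height), height - 1)
        image[py][px] += f
    return image
```

The key design point survives even in this toy form: shading information is computed per 3D point first, so occluded and semi-transparent geometry still contributes, and only afterwards is it flattened to 2D.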
As a consequence, the proposed two-stage operator serves as a valuable extension to modern deferred shading approaches.

Global Illumination Shadow Layers
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Desrichard, François; Vanderhaeghe, David; Paulin, Mathias; Boubekeur, Tamy and Sen, Pradeep
Computer graphics artists often resort to compositing to rework light effects in a synthetic image without requiring a new render. Shadows are primary subjects of artistic manipulation, as they carry important stylistic information while our perception is tolerant of their editing. In this paper we formalize the notion of global shadow, generalizing the direct shadow found in previous work to a global illumination context. We define an object's shadow layer as the difference between two altered renders of the scene. A shadow layer contains the radiance lost on the camera film because of a given object. We translate this definition into the theoretical framework of Monte Carlo integration, obtaining a concise expression of the shadow layer. Building on it, we propose a path tracing algorithm that renders both the original image and any number of shadow layers in a single pass: the user may choose to separate shadows on a per-object and per-light basis, enabling intuitive and decoupled edits.

Flexible SVBRDF Capture with a Multi-Image Deep Network
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Deschaintre, Valentin; Aittala, Miika; Durand, Fredo; Drettakis, George; Bousseau, Adrien; Boubekeur, Tamy and Sen, Pradeep
Empowered by deep learning, recent methods for material capture can estimate spatially-varying reflectance from a single photograph. Such lightweight capture is in stark contrast with the tens or hundreds of pictures required by traditional optimization-based approaches. However, a single image is often simply not enough to observe the rich appearance of real-world materials.
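The shadow-layer definition from Global Illumination Shadow Layers above (radiance lost because of a given object, expressed as a difference of two altered renders) can be illustrated with a toy one-pixel Monte Carlo estimator. The `render` function and the 30% blocker coverage are made-up stand-ins, and the paper itself obtains all layers in a single pass rather than from two separate renders:

```python
import random

def render(blocker_present, n=20000, seed=0):
    """Toy MC estimate of one pixel's radiance: each sampled light path
    contributes 1 unless it is cut by the blocker object."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.random()                          # sampled point on the light
        occluded = blocker_present and x < 0.3    # blocker covers ~30% of paths
        total += 0.0 if occluded else 1.0
    return total / n

full = render(blocker_present=True)
unblocked = render(blocker_present=False)
shadow_layer = unblocked - full   # radiance lost because of the blocker
```

Because `full + shadow_layer == unblocked` by construction, an artist can rescale or tint `shadow_layer` in compositing and re-add it without re-rendering, which is exactly the editing workflow the layers enable.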
We present a deep-learning method capable of estimating material appearance from a variable number of uncalibrated and unordered pictures captured with a handheld camera and flash. Thanks to an order-independent fusing layer, this architecture extracts the most useful information from each picture while benefiting from strong priors learned from data. The method can handle both view and light direction variation without calibration. We show how our method improves its prediction with the number of input pictures, and reaches high-quality reconstructions with as few as 1 to 10 images - a sweet spot between existing single-image and complex multi-image approaches.

Learned Fitting of Spatially Varying BRDFs
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Merzbach, Sebastian; Hermann, Max; Rump, Martin; Klein, Reinhard; Boubekeur, Tamy and Sen, Pradeep
The use of spatially varying reflectance models (SVBRDFs) is the state of the art in physically based rendering, and the ultimate goal is to acquire them from real-world samples. Recently, several promising deep learning approaches have emerged that create such models from a few uncalibrated photos after being trained on synthetic SVBRDF datasets. While the achieved results are already very impressive, the reconstruction accuracy of these approaches is still far from that of specialized devices. On the other hand, fitting SVBRDF parameter maps to the gigabytes of calibrated HDR images per material acquired by state-of-the-art high-quality material scanners takes on the order of several hours for realistic spatial resolutions. In this paper, we present the first deep learning approach capable of producing SVBRDF parameter maps more than two orders of magnitude faster than state-of-the-art approaches, while still providing results of equal quality and generalizing to new materials unseen during training.
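The order-independent fusing layer in Flexible SVBRDF Capture above can be sketched as permutation-invariant pooling over per-image feature vectors (element-wise max here; the actual network's feature extraction and pooling are more involved):

```python
def fuse(per_image_features):
    """Order-independent fusion: element-wise max across the feature
    vectors extracted from each input picture. The result is identical
    for any ordering (and any number) of input images."""
    return [max(column) for column in zip(*per_image_features)]
```

Because the pooled result does not depend on image order or count, the same network can consume 1 to 10 (or more) casually captured photos without calibration.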
This is made possible by training our network on a large-scale database of material scans that we have gathered with a commercially available SVBRDF scanner. In particular, we train a convolutional neural network to map calibrated input images to the 13 parameter maps of an anisotropic Ward BRDF, modified to account for Fresnel reflections, and evaluate the results by comparing the measured images against re-renderings from our SVBRDF predictions. The novel approach is extensively validated on real-world data taken from our material database, which we make publicly available at https://cg.cs.uni-bonn.de/svbrdfs/.

Ray Classification for Accelerated BVH Traversal
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Hendrich, Jakub; Pospíšil, Adam; Meister, Daniel; Bittner, Jiří; Boubekeur, Tamy and Sen, Pradeep
For ray tracing based methods, traversing a hierarchical acceleration data structure takes up a substantial portion of the total rendering time. We propose an additional data structure which cuts off large parts of the hierarchical traversal. We use the idea of ray classification combined with the hierarchical scene representation provided by a bounding volume hierarchy. We precompute short arrays of indices to subtrees inside the hierarchy and use them to initiate the traversal for a given ray class. This arrangement is compact enough to be cache-friendly, preventing the method from negating its traversal gains by excessive memory traffic. The method is easy to use with existing renderers, which we demonstrate by integrating it into the PBRT renderer.
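The ray classification idea above (map each ray to a class, then start traversal at precomputed subtrees for that class instead of the BVH root) can be sketched as follows; the grid/octant class definition and table layout are illustrative assumptions, not the paper's exact scheme:

```python
def ray_class(origin, direction, grid_res=4):
    """Classify a ray by its quantized origin cell (coarse grid over the
    unit scene bounds) and its direction octant (sign of each component)."""
    cell = tuple(min(int(o * grid_res), grid_res - 1) for o in origin)
    octant = tuple(d >= 0.0 for d in direction)
    return cell, octant

# Precomputed table: ray class -> short array of BVH subtree entry points.
# Traversal starts at these nodes, skipping the upper levels of the tree.
entry_points = {}

def traversal_roots(origin, direction, bvh_root=0):
    """Look up the entry nodes for this ray's class; fall back to the
    root when the class was never precomputed."""
    return entry_points.get(ray_class(origin, direction), [bvh_root])
```

Nearby, similarly oriented rays share a class and therefore reuse the same short, cache-friendly entry array, which is where the traversal savings come from.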
The proposed technique reduces the number of traversal steps by 42% on average, saving around 15% of the ray-scene intersection time on average.

Orthogonal Array Sampling for Monte Carlo Rendering
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Jarosz, Wojciech; Enayet, Afnan; Kensler, Andrew; Kilpatrick, Charlie; Christensen, Per; Boubekeur, Tamy and Sen, Pradeep
We generalize N-rooks, jittered, and (correlated) multi-jittered sampling to higher dimensions by importing and improving upon a class of techniques called orthogonal arrays from the statistics literature. Renderers typically combine or ''pad'' a collection of lower-dimensional (e.g. 2D and 1D) stratified patterns to form higher-dimensional samples for integration. This maintains stratification in the original dimension pairs but loses it for all other dimension pairs. For truly multi-dimensional integrands like those in rendering, this increases variance and deteriorates the rate of convergence to that of pure random sampling. Care must therefore be taken to assign the primary dimension pairs to the dimensions with the most integrand variation, which complicates implementations. We tackle this problem by developing a collection of practical, in-place multi-dimensional sample generation routines that stratify points on all t-dimensional and 1-dimensional projections simultaneously. For instance, when t=2, any 2D projection of our samples is a (correlated) multi-jittered point set. This property not only reduces variance, but also simplifies implementations, since sample dimensions can now be assigned to integrand dimensions arbitrarily while maintaining the same level of stratification.
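The classic stratified constructions this paper generalizes can be sketched in 2D: jittered sampling places one sample per cell of an n×n grid, while N-rooks (Latin hypercube) sampling stratifies both 1D projections. This is a minimal sketch of the well-known patterns, not the paper's orthogonal-array routines:

```python
import random

def jittered_2d(n, seed=1):
    """n*n jittered samples: one uniform random point per cell
    of an n-by-n grid."""
    rng = random.Random(seed)
    return [((i + rng.random()) / n, (j + rng.random()) / n)
            for i in range(n) for j in range(n)]

def n_rooks(n, seed=1):
    """n samples stratified in both 1D projections (Latin hypercube):
    exactly one sample per row and per column of an n-by-n grid."""
    rng = random.Random(seed)
    cols = list(range(n))
    rng.shuffle(cols)
    return [((i + rng.random()) / n, (cols[i] + rng.random()) / n)
            for i in range(n)]
```

Padding several such 2D patterns into a high-dimensional sample preserves these guarantees only within each pattern's own dimension pair, which is precisely the weakness the orthogonal-array construction removes.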
Our techniques reduce variance compared to traditional 2D padding approaches like PBRT's (0,2) and Stratified samplers, and provide quality nearly equal to state-of-the-art QMC samplers like Sobol and Halton while avoiding the structured artifacts commonly seen when a single sample set is used to cover an entire image. While in this work we focus on constructing finite sampling point sets, we also discuss potential avenues for extending our work to progressive sequences (more suitable for incremental rendering) in the future.

Real-time Image-based Lighting of Microfacet BRDFs with Varying Iridescence
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Kneiphof, Tom; Golla, Tim; Klein, Reinhard; Boubekeur, Tamy and Sen, Pradeep
Iridescence is a natural phenomenon that is perceived as gradual color changes depending on the view and illumination direction. Prominent examples are the colors seen in oil films and soap bubbles. Unfortunately, iridescent effects are particularly difficult to recreate in real-time computer graphics. We present a high-quality real-time method for rendering iridescent effects under image-based lighting. Previous methods model dielectric thin films of varying thickness on top of an arbitrary microfacet model with a conducting or dielectric base material, and evaluate the resulting reflectance term, responsible for the iridescent effects, for only a single direction when using real-time image-based lighting. This leads to bright halos at grazing angles and over-saturated colors on rough surfaces, causing an unnatural appearance that is not observed in ground-truth data. We address this problem by taking the distribution of light directions, given by the environment map and surface roughness, into account when evaluating the reflectance term. In particular, our approach prefilters the first and second moments of the light direction, which are used to evaluate a filtered version of the reflectance term.
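The moment prefiltering step just described can be illustrated in a toy discrete form: given environment-map directions and their radiance, compute the radiance-weighted first and second moments of the light direction, which a filtered reflectance evaluation would then consume. This is a sketch of the idea, not the paper's GPU prefiltering:

```python
def light_direction_moments(directions, radiance):
    """Radiance-weighted first moment E[d] and per-component second
    moment E[d*d] of the incoming light direction over a set of
    environment-map samples."""
    total = sum(radiance)
    first = [sum(d[i] * r for d, r in zip(directions, radiance)) / total
             for i in range(3)]
    second = [sum(d[i] * d[i] * r for d, r in zip(directions, radiance)) / total
              for i in range(3)]
    return first, second
```

Intuitively, the difference between the second moment and the squared first moment measures how spread out the effective light direction is, and that spread is what a filtered reflectance term can account for instead of evaluating a single direction.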
We show that the visual quality of our approach is superior to that previously achieved, while having only a small negative impact on performance.

Tessellated Shading Streaming
(The Eurographics Association and John Wiley & Sons Ltd., 2019) Hladky, Jozef; Seidel, Hans-Peter; Steinberger, Markus; Boubekeur, Tamy and Sen, Pradeep
Presenting high-fidelity 3D content on compact portable devices with low computational power is challenging. Smartphones, tablets and head-mounted displays (HMDs) suffer from thermal and battery-life constraints and thus cannot match the render quality of desktop PCs and laptops. Streaming rendering makes it possible to show high-quality content, but can suffer from potentially high latency. We propose an approach that efficiently captures shading samples in object space and packs them into a texture. Streaming this texture to the client, we support temporal frame up-sampling with high fidelity, low latency and high mobility. We introduce two novel sample distribution strategies and a novel triangle representation in the shading atlas space. Since such a system requires dynamic parallelism, we propose an implementation exploiting the power of hardware-accelerated tessellation stages. Our approach allows fast decoding and rendering of extrapolated views on a client device by using hardware-accelerated interpolation between shading samples and a set of potentially visible geometry. A comparison to existing shading methods shows that our sample distributions allow better client shading quality than previous atlas streaming approaches and outperform image-based methods in all relevant aspects.
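The object-space capture in Tessellated Shading Streaming packs per-primitive shading samples into a texture atlas that is then streamed to the client. A minimal sketch of such a packing maps each triangle id to a fixed-size tile; the uniform tiling and the function names are illustrative assumptions, as the paper's atlas layout and triangle representation are considerably more sophisticated:

```python
def atlas_tile_origin(tri_id, tiles_per_row=16, tile_size=8):
    """Map a triangle id to the pixel origin (x, y) of its tile
    in the shading atlas texture."""
    row, col = divmod(tri_id, tiles_per_row)
    return col * tile_size, row * tile_size

def atlas_texel(tri_id, u, v, tiles_per_row=16, tile_size=8):
    """Address a shading sample (u, v in [0, 1)) inside a triangle's tile."""
    x0, y0 = atlas_tile_origin(tri_id, tiles_per_row, tile_size)
    return x0 + int(u * tile_size), y0 + int(v * tile_size)
```

Because shading lives in object space rather than screen space, the client can re-project the same streamed texture to extrapolated camera poses, which is what enables low-latency temporal up-sampling.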