Search Results

Now showing 1–10 of 34
  • Item
    A Practical Approach to Physically-Based Reproduction of Diffusive Cosmetics
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Kim, Goanghun; Ko, Hyeong-Seok; Fu, Hongbo and Ghosh, Abhijeet and Kopf, Johannes
    In this paper, we introduce the so-called bSX method as a new way to utilize the Kubelka-Munk (K-M) model. Assuming the material is completely diffusive, the K-M model gives the reflectance and transmittance of the material from an observation of the material applied on a backing, where the observation includes the thickness of the applied layer. By rearranging the original K-M equation, we show that the reflectance and transmittance can be calculated without knowing the thickness, which is a practically useful contribution. Based on this finding, we develop the bSX method, which can (1) capture the material-specific parameters from two photos taken before and after the material application, and (2) reproduce the material's effect on a novel backing. We evaluated the proposed method in various cases related to virtual cosmetic try-on, including (1) capture from a single-color backing, (2) capture from a human-skin backing, (3) reproduction of varying-thickness effects, (4) reproduction of multi-layer cosmetic application effects, and (5) application of the method to makeup transfer. Compared to previous image-based makeup transfer methods, the bSX method reproduces the feel of the cosmetics more accurately.
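    For orientation, here is a minimal numpy sketch of the classic Kubelka-Munk layer equations the abstract builds on; the product bSX in the hyperbolic terms is presumably what the method is named after. The paper's actual contribution, rearranging these equations so the thickness X is not needed, is not reproduced here, and all parameter values below are illustrative assumptions.
    ```python
    import numpy as np

    def km_layer(K, S, X, Rg):
        """Reflectance R and transmittance T of a completely diffusive
        layer (absorption K, scattering S, thickness X) applied on a
        backing of reflectance Rg -- the standard hyperbolic solution
        of the Kubelka-Munk model."""
        a = 1.0 + K / S
        b = np.sqrt(a * a - 1.0)
        coth_bSX = 1.0 / np.tanh(b * S * X)
        R = (1.0 - Rg * (a - b * coth_bSX)) / (a - Rg + b * coth_bSX)
        T = b / (a * np.sinh(b * S * X) + b * np.cosh(b * S * X))
        return R, T

    # e.g. a thin, moderately scattering layer over a skin-like backing
    R, T = km_layer(K=0.3, S=2.0, X=0.1, Rg=0.45)
    ```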
  • Item
    High Dynamic Range Point Clouds for Real-Time Relighting
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Sabbadin, Manuele; Palma, Gianpaolo; Banterle, Francesco; Boubekeur, Tamy; Cignoni, Paolo; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Acquired 3D point clouds make quick modeling of virtual scenes from the real world possible. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality (AR) scenarios. Typically, a standard relighting environment exploits global environment maps together with a collection of local light probes to reflect the light mood of the real scene onto the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene, which may cover only part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings onto the original cloud. At this stage, we propagate the expansion to regions not covered by the renderings, or covered with low-quality dynamic range, by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance. As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step relative to a perfect ground truth. We also report experiments with real captured data, covering a range of capture technologies, from active scanning to multi-view stereo reconstruction.
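    As a rough illustration of the propagation step described above, the numpy sketch below spreads per-point expansion ratios (HDR over LDR luminance) from points covered by the renderings to the remaining points with Jacobi iterations on a k-nearest-neighbour graph. This is only a simple stand-in for the Poisson system the paper solves; every name and constant is an assumption.
    ```python
    import numpy as np

    def propagate_expansion(points, ldr_rgb, ratio, covered, k=8, iters=200):
        """points: (n, 3) positions; ldr_rgb: (n, 3) LDR colours;
        ratio: (n,) HDR/LDR luminance ratios, valid where covered is
        True. Uncovered points receive the average ratio of their k
        nearest neighbours, iterated until quasi-convergence."""
        d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
        nn = np.argsort(d, axis=1)[:, 1:k + 1]   # skip self at index 0
        r = np.where(covered, ratio, 1.0).astype(float)
        for _ in range(iters):
            r = np.where(covered, ratio, r[nn].mean(axis=1))
        return ldr_rgb * r[:, None]              # boosted (pseudo-HDR) colours
    ```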
  • Item
    Rain Wiper: An Incremental Randomly Wired Network for Single Image Deraining
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Liang, Xiwen; Qiu, Bin; Su, Zhuo; Gao, Chengying; Shi, Xiaohong; Wang, Ruomei; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Single image rain removal is a challenging ill-posed problem due to the various shapes and densities of rain streaks. We present a novel incremental randomly wired network (IRWN) for single image deraining. Unlike previous methods, most module structures in IRWN are generated by a stochastic network generator based on random graph theory, which eases the burden of manual design and further helps to characterize more complex rain streaks. To decrease the number of network parameters and extract details more efficiently, an image pyramid is fused via the multi-scale network structure. An incremental rectified loss is proposed to better remove rain streaks under different rain conditions and recover the texture information of target objects. Extensive experiments on synthetic and real-world datasets demonstrate that the proposed method significantly outperforms state-of-the-art methods. In addition, an ablation study illustrates the improvements obtained by different modules and loss terms in IRWN.
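    To make the "stochastic network generator" idea concrete, here is a small sketch using networkx; the Watts-Strogatz generator is one plausible choice (following earlier randomly wired network work), and the hyper-parameters are illustrative, not the paper's.
    ```python
    import networkx as nx

    def random_module_wiring(n_nodes=8, k=4, p=0.5, seed=0):
        """Sample an undirected small-world graph, then orient every
        edge from the lower to the higher node index so the wiring is
        acyclic and can be executed as a feed-forward module."""
        g = nx.watts_strogatz_graph(n_nodes, k, p, seed=seed)
        dag = nx.DiGraph((min(u, v), max(u, v)) for u, v in g.edges())
        inputs = [v for v in dag if dag.in_degree(v) == 0]
        outputs = [v for v in dag if dag.out_degree(v) == 0]
        return dag, inputs, outputs
    ```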
  • Item
    ExpandNet: A Deep Convolutional Neural Network for High Dynamic Range Expansion from Low Dynamic Range Content
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Marnerides, Demetris; Bashford-Rogers, Thomas; Hatchett, Jon; Debattista, Kurt; Gutierrez, Diego and Sheffer, Alla
    High dynamic range (HDR) imaging provides the capability of handling real-world lighting, as opposed to traditional low dynamic range (LDR) imaging, which struggles to accurately represent images with higher dynamic range. However, most imaging content is still available only in LDR. This paper presents a method, termed ExpandNet, for generating HDR content from LDR content based on deep Convolutional Neural Networks (CNNs). ExpandNet accepts LDR images as input and generates images with an expanded range in an end-to-end fashion. The model attempts to reconstruct missing information that was lost from the original signal due to quantization, clipping, tone mapping or gamma correction. The added information is reconstructed from learned features, as the network is trained in a supervised fashion using a dataset of HDR images. The approach is fully automatic and data driven; it does not require any heuristics or human expertise. ExpandNet uses a multiscale architecture which avoids the use of upsampling layers to improve image quality. Quantitatively, the method performs well on multiple metrics compared to expansion/inverse tone mapping operators, even for badly exposed inputs.
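    The sketch below, in torch, shows the flavour of the multiscale, upsampling-free design: a local branch for detail, a dilated branch for medium-range context, and a globally pooled branch, fused by 1x1 convolutions. The published ExpandNet uses specific layer counts and a fixed-size global branch; channel counts and pooling sizes here are simplifications.
    ```python
    import torch
    import torch.nn as nn

    class MiniExpandNet(nn.Module):
        def __init__(self, c=32):
            super().__init__()
            self.local = nn.Sequential(                      # fine detail
                nn.Conv2d(3, c, 3, padding=1), nn.SELU(),
                nn.Conv2d(c, c, 3, padding=1), nn.SELU())
            self.dilated = nn.Sequential(                    # medium context
                nn.Conv2d(3, c, 3, padding=2, dilation=2), nn.SELU(),
                nn.Conv2d(c, c, 3, padding=2, dilation=2), nn.SELU())
            self.glob = nn.Sequential(                       # scene-level context
                nn.AdaptiveAvgPool2d(8),
                nn.Conv2d(3, c, 3, padding=1), nn.SELU(),
                nn.AdaptiveAvgPool2d(1))
            self.fuse = nn.Sequential(
                nn.Conv2d(3 * c, c, 1), nn.SELU(),
                nn.Conv2d(c, 3, 1), nn.Sigmoid())

        def forward(self, x):
            g = self.glob(x).expand(-1, -1, x.shape[2], x.shape[3])
            return self.fuse(torch.cat([self.local(x), self.dilated(x), g], 1))

    hdr = MiniExpandNet()(torch.rand(1, 3, 64, 64))          # (1, 3, 64, 64)
    ```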
  • Item
    Denoising Deep Monte Carlo Renderings
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Vicini, D.; Adler, D.; Novák, J.; Rousselle, F.; Burley, B.; Chen, Min and Benes, Bedrich
    We present a novel algorithm to denoise deep Monte Carlo renderings, in which pixels contain multiple colour values, each for a different range of depths. Deep images are a more expressive representation of the scene than conventional flat images. However, since each depth bin receives only a fraction of the flat pixel's samples, denoising the bins is harder due to their less accurate mean and variance estimates. Furthermore, deep images lack a regular structure in depth: the number of depth bins and their depth ranges vary across pixels. This prevents a straightforward application of the patch-based distance metrics frequently used to improve the robustness of existing denoising filters. We address these constraints by combining a flat image-space non-local means filter operating on pixel colours with a cross-bilateral filter operating on auxiliary features (albedo, normal, etc.). Our approach significantly reduces noise in deep images while preserving their structure. To the best of our knowledge, our algorithm is the first to enable efficient deep-compositing workflows with denoised Monte Carlo renderings. We demonstrate the performance of our filter on a range of scenes, highlighting the challenges and advantages of denoising deep images.
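    For the cross-bilateral half of that combination, a dense numpy sketch on flat (non-deep) buffers is given below; the paper's handling of depth bins and its non-local means component are not shown, and the sigmas are illustrative.
    ```python
    import numpy as np

    def cross_bilateral(color, feat, radius=3, sigma_s=2.0, sigma_f=0.2):
        """Weight each neighbour by spatial distance and by similarity
        of auxiliary features (albedo, normal, ...), then average the
        noisy colours. color: (h, w, 3); feat: (h, w, f)."""
        h, w, _ = color.shape
        out = np.zeros_like(color)
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                dy, dx = np.mgrid[y0:y1, x0:x1]
                w_s = np.exp(-((dy - y) ** 2 + (dx - x) ** 2) / (2 * sigma_s ** 2))
                df = feat[y0:y1, x0:x1] - feat[y, x]
                w_f = np.exp(-(df ** 2).sum(-1) / (2 * sigma_f ** 2))
                wgt = (w_s * w_f)[..., None]
                out[y, x] = (wgt * color[y0:y1, x0:x1]).sum((0, 1)) / wgt.sum()
        return out
    ```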
  • Item
    Selecting Texture Resolution Using a Task-specific Visibility Metric
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Wolski, Krzysztof; Giunchi, Daniele; Kinuwaki, Shinichi; Didyk, Piotr; Myszkowski, Karol; Steed, Anthony; Mantiuk, Rafal K.; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    In real-time rendering, the appearance of scenes is greatly affected by the quality and resolution of the textures used for image synthesis. At the same time, the size of textures determines the performance and the memory requirements of rendering. As a result, finding the optimal texture resolution is critical, but also a non-trivial task, since the visibility of texture imperfections depends on the underlying geometry, illumination, interactions between several texture maps, and viewing positions. Ideally, we would like to automate the task with a visibility metric that can predict the optimal texture resolution. To maximize the performance of such a metric, it should be trained for the given task; this, however, requires sufficient user data, which is often difficult to obtain. To address this problem, we develop a procedure for training an image visibility metric for a specific task while reducing the effort required to collect new data. The procedure involves generating a large dataset using an existing visibility metric and then refining that dataset with the help of an efficient perceptual experiment. The refined dataset is used to retune the metric. This way, we augment sparse perceptual data into a large number of per-pixel annotated visibility maps, which serve as the training data for application-specific visibility metrics. While our approach is general and can potentially be applied to different image distortions, we demonstrate an application in a game engine, where we optimize the resolution of various textures, such as albedo and normal maps.
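    The two-stage procedure maps naturally onto a pretrain-then-fine-tune loop; the torch sketch below is one hypothetical reading of it, with all names, losses and epoch counts assumed rather than taken from the paper. Both loaders are expected to yield (input stack, per-pixel visibility map) pairs.
    ```python
    import torch
    import torch.nn as nn

    def retune_metric(metric: nn.Module, pseudo_loader, refined_loader,
                      epochs=(10, 5), lr=1e-4):
        """Stage 1: fit to visibility maps generated by an existing
        metric (pseudo-labels). Stage 2: fine-tune on the smaller set
        refined by the perceptual experiment."""
        opt = torch.optim.Adam(metric.parameters(), lr=lr)
        loss_fn = nn.BCEWithLogitsLoss()   # per-pixel detection probability
        for loader, n in zip((pseudo_loader, refined_loader), epochs):
            for _ in range(n):
                for images, vis_map in loader:
                    opt.zero_grad()
                    loss_fn(metric(images), vis_map).backward()
                    opt.step()
        return metric
    ```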
  • Item
    Learning Explicit Smoothing Kernels for Joint Image Filtering
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Fang, Xiaonan; Wang, Miao; Shamir, Ariel; Hu, Shi-Min; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Smoothing noise while preserving strong edges is an important problem in image processing. Image smoothing filters can be either explicit (based on a local weighted average) or implicit (based on global optimization). Implicit methods are usually time-consuming and cannot be applied to joint image filtering tasks, i.e., leveraging the structural information of a guidance image to filter a target image. Previous deep-learning-based image smoothing filters are all implicit and cannot be used for joint filtering. In this paper, we propose to learn explicit guidance feature maps as well as offset maps from the guidance image and a smoothing parameter, which can be used to smooth the input itself or to filter images in other target domains. We design a deep convolutional neural network consisting of a fully convolutional block for extracting guidance and offset maps, together with a stacked, spatially varying, deformable convolution block for joint image filtering. Our models can approximate several representative image smoothing filters with accuracy comparable to state-of-the-art methods, and serve as general tools for other joint image filtering tasks, such as color interpolation, depth map upsampling, saliency map upsampling, flash/non-flash image denoising and RGB/NIR image denoising.
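    Since the filtering stage is built on spatially varying deformable convolution, one way to picture it is via torchvision's deform_conv2d, as sketched below; the offsets would come from the guidance branch (not shown), and the weights here are a toy smoothing kernel rather than learned ones.
    ```python
    import torch
    from torchvision.ops import deform_conv2d

    def guided_deformable_filter(target, offsets, weight):
        """target: (B, C, H, W); offsets: (B, 2*K*K, H, W) predicted
        from the guidance image; weight: (C_out, C_in, K, K)."""
        k = weight.shape[-1]
        return deform_conv2d(target, offsets, weight, padding=k // 2)

    b, c, h, w = 1, 3, 64, 64
    target = torch.rand(b, c, h, w)
    offsets = torch.zeros(b, 2 * 9, h, w)             # zero offsets = plain conv
    weight = torch.full((c, c, 3, 3), 1.0 / (9 * c))  # box-blur-like kernel
    smoothed = guided_deformable_filter(target, offsets, weight)
    ```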
  • Item
    Bayesian Collaborative Denoising for Monte Carlo Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Boughida, Malik; Boubekeur, Tamy; Zwicker, Matthias and Sander, Pedro
    The stochastic nature of Monte Carlo rendering algorithms inherently produces noisy images. Essentially, three approaches have been developed to address this issue: improving the ray-tracing strategies to reduce pixel variance, providing adaptive sampling by increasing the number of rays in regions that need it, and filtering the noisy image as a post-process. Although algorithms in the last category introduce bias, they remain highly attractive, as they quickly improve the visual quality of the images, are compatible with all sorts of rendering effects, have a low computational cost and, for some of them, avoid deep modifications of the rendering engine. In this paper, we build upon recent advances in both non-local and collaborative filtering methods to propose a new efficient denoising operator for Monte Carlo rendering. Starting from the local statistics that emanate from each pixel's sample distribution, we enrich the image with local covariance measures and introduce a non-local Bayesian filter specifically designed to address the noise stemming from Monte Carlo rendering. The resulting algorithm only requires the rendering engine to provide, for each pixel, a histogram and a covariance matrix of its color samples. Compared to state-of-the-art sample-based methods, we obtain improved denoising results, especially in dark areas, with a large increase in speed and more robustness with respect to the main parameter of the algorithm. We provide a detailed mathematical exposition of our Bayesian approach, discuss extensions to multi-scale execution, adaptive sampling and animated scenes, and experimentally validate it on a collection of scenes.
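    At the heart of non-local Bayesian filtering is a linear MMSE (Wiener-style) update built from exactly the statistics mentioned above: a mean and a covariance of the colour samples, plus an estimate of the noise covariance. The numpy sketch below shows that core update in isolation; the paper's histogram-based distances, patch grouping and multiscale machinery are omitted.
    ```python
    import numpy as np

    def bayes_shrink(mean, sample_cov, noise_cov, noisy):
        """Shrink a noisy colour toward the group mean according to the
        fraction of the observed covariance explained by signal."""
        gain = (sample_cov - noise_cov) @ np.linalg.inv(sample_cov)
        return mean + gain @ (noisy - mean)

    mean = np.array([0.40, 0.35, 0.30])
    sample_cov = np.diag([0.05, 0.04, 0.04])   # covariance of colour samples
    noise_cov = np.diag([0.03, 0.03, 0.03])    # estimated MC noise covariance
    print(bayes_shrink(mean, sample_cov, noise_cov, np.array([0.9, 0.2, 0.1])))
    ```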
  • Item
    Naturalness-Preserving Image Tone Enhancement Using Generative Adversarial Networks
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Son, Hyeongseok; Lee, Gunhee; Cho, Sunghyun; Lee, Seungyong; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    This paper proposes a deep learning-based image tone enhancement approach that can maximally enhance the tone of an image while preserving its naturalness. Our approach does not require ground-truth images carefully generated by human experts for training. Instead, we train a deep neural network to mimic the behavior of a previous classical filtering method that produces drastic but possibly unnatural-looking tone enhancement results. To preserve naturalness, we adopt the generative adversarial network (GAN) framework as a naturalness regularizer. To suppress artifacts caused by the generative nature of the GAN framework, we also propose an imbalanced cycle-consistency loss. Experimental results show that, compared to previous state-of-the-art approaches, our approach can effectively enhance the tone and contrast of an image while preserving naturalness.
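    The abstract does not spell out the loss, but one plausible form of an "imbalanced" cycle-consistency term is sketched below: unlike CycleGAN's symmetric version, the two reconstruction directions receive different weights. G (input to enhanced), F (enhanced to input) and the weights are assumptions for illustration.
    ```python
    import torch
    import torch.nn.functional as F_

    def imbalanced_cycle_loss(x, y, G, F, lam_fwd=10.0, lam_bwd=1.0):
        """Penalize F(G(x)) != x more heavily than G(F(y)) != y, so the
        enhancement direction is constrained more strongly."""
        return (lam_fwd * F_.l1_loss(F(G(x)), x)
                + lam_bwd * F_.l1_loss(G(F(y)), y))
    ```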
  • Item
    Terrain Super-resolution through Aerial Imagery and Fully Convolutional Networks
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Argudo, Oscar; Chica, Antonio; Andujar, Carlos; Gutierrez, Diego and Sheffer, Alla
    Despite recent advances in surveying techniques, publicly available Digital Elevation Models (DEMs) of terrains are low-resolution except for selected places on Earth. In this paper, we present a new method to turn low-resolution DEMs into plausible and faithful high-resolution terrains. Unlike other approaches to terrain synthesis/amplification (fractal noise, hydraulic and thermal erosion, multi-resolution dictionaries), we benefit from high-resolution aerial images to produce highly detailed DEMs that mimic the features of the real terrain. We explore different Fully Convolutional Neural Network architectures to learn upsampling patterns for DEMs from detailed training sets (high-resolution DEMs and orthophotos), yielding up to one order of magnitude more resolution. Our comparative results show that our method outperforms competing data amplification approaches in terms of elevation accuracy and terrain plausibility.
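    A minimal torch sketch of the fused-input idea follows: upsample the low-resolution DEM, concatenate the registered orthophoto, and let a small fully convolutional network predict the residual detail. Layer sizes are illustrative, not the architecture the paper converged on.
    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DEMSuperRes(nn.Module):
        def __init__(self, c=64):
            super().__init__()
            self.net = nn.Sequential(          # 1 DEM channel + 3 RGB channels
                nn.Conv2d(4, c, 3, padding=1), nn.ReLU(),
                nn.Conv2d(c, c, 3, padding=1), nn.ReLU(),
                nn.Conv2d(c, 1, 3, padding=1))

        def forward(self, dem_lr, ortho):
            up = F.interpolate(dem_lr, size=ortho.shape[-2:],
                               mode='bicubic', align_corners=False)
            return up + self.net(torch.cat([up, ortho], dim=1))

    hr = DEMSuperRes()(torch.rand(1, 1, 32, 32), torch.rand(1, 3, 256, 256))
    ```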