Rendering 2019 - DL-only / Industry Track


Strasbourg, France | July 10-12, 2019
(Rendering 2019 CGF papers are available here.)
High Performance Rendering
Real-Time Hybrid Hair Rendering
Erik Sven Vasconcelos Jansson, Matthäus G. Chajdas, Jason Lacroix, and Ingemar Ragnemalm
Spectral Effects
Spectral Primary Decomposition for Rendering with sRGB Reflectance
Ian Mallett and Cem Yuksel
Light Transport
Adaptive Multi-view Path Tracing
Basile Fraboni, Jean-Claude Iehl, Vincent Nivoliers, and Guillaume Bouchard
Interactive and Real-time Rendering
Impulse Responses for Precomputing Light from Volumetric Media
Adrien Dubouchet, Peter-Pike Sloan, Wojciech Jarosz, and Derek Nowrouzezahrai
Foveated Real-Time Path Tracing in Visual-Polar Space
Matias Koskela, Atro Lotvonen, Markku Mäkitalo, Petrus Kivi, Timo Viitanen, and Pekka Jääskeläinen
Deep Learning
Puppet Dubbing
Ohad Fried and Maneesh Agrawala
Industry Track
Implementing One-Click Caustics in Corona Renderer
Martin Šik and Jaroslav Křivánek
De-lighting a High-resolution Picture for Material Acquisition
Rosalie Martin, Arthur Meyer, and Davide Pesare
The Challenges of Releasing the Moana Island Scene
Rasmus Tamstorf and Heather Pritchett

BibTeX (Rendering 2019 - DL-only / Industry Track)
@inproceedings{10.2312:sr.20191215,
  booktitle = {Eurographics Symposium on Rendering - DL-only and Industry Track},
  editor = {Boubekeur, Tamy and Sen, Pradeep},
  title = {{Real-Time Hybrid Hair Rendering}},
  author = {Jansson, Erik Sven Vasconcelos and Chajdas, Matthäus G. and Lacroix, Jason and Ragnemalm, Ingemar},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-095-6},
  DOI = {10.2312/sr.20191215}
}
@inproceedings{10.2312:sr.20191216,
  booktitle = {Eurographics Symposium on Rendering - DL-only and Industry Track},
  editor = {Boubekeur, Tamy and Sen, Pradeep},
  title = {{Spectral Primary Decomposition for Rendering with sRGB Reflectance}},
  author = {Mallett, Ian and Yuksel, Cem},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-095-6},
  DOI = {10.2312/sr.20191216}
}
@inproceedings{10.2312:sr.20191217,
  booktitle = {Eurographics Symposium on Rendering - DL-only and Industry Track},
  editor = {Boubekeur, Tamy and Sen, Pradeep},
  title = {{Adaptive Multi-view Path Tracing}},
  author = {Fraboni, Basile and Iehl, Jean-Claude and Nivoliers, Vincent and Bouchard, Guillaume},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-095-6},
  DOI = {10.2312/sr.20191217}
}
@inproceedings{10.2312:sr.20191218,
  booktitle = {Eurographics Symposium on Rendering - DL-only and Industry Track},
  editor = {Boubekeur, Tamy and Sen, Pradeep},
  title = {{Impulse Responses for Precomputing Light from Volumetric Media}},
  author = {Dubouchet, Adrien and Sloan, Peter-Pike and Jarosz, Wojciech and Nowrouzezahrai, Derek},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-095-6},
  DOI = {10.2312/sr.20191218}
}
@inproceedings{10.2312:sr.20191219,
  booktitle = {Eurographics Symposium on Rendering - DL-only and Industry Track},
  editor = {Boubekeur, Tamy and Sen, Pradeep},
  title = {{Foveated Real-Time Path Tracing in Visual-Polar Space}},
  author = {Koskela, Matias and Lotvonen, Atro and Mäkitalo, Markku and Kivi, Petrus and Viitanen, Timo and Jääskeläinen, Pekka},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-095-6},
  DOI = {10.2312/sr.20191219}
}
@inproceedings{10.2312:sr.20191220,
  booktitle = {Eurographics Symposium on Rendering - DL-only and Industry Track},
  editor = {Boubekeur, Tamy and Sen, Pradeep},
  title = {{Puppet Dubbing}},
  author = {Fried, Ohad and Agrawala, Maneesh},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-095-6},
  DOI = {10.2312/sr.20191220}
}
@inproceedings{10.2312:sr.20191221,
  booktitle = {Eurographics Symposium on Rendering - DL-only and Industry Track},
  editor = {Boubekeur, Tamy and Sen, Pradeep},
  title = {{Implementing One-Click Caustics in Corona Renderer}},
  author = {Šik, Martin and Křivánek, Jaroslav},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-095-6},
  DOI = {10.2312/sr.20191221}
}
@inproceedings{10.2312:sr.20191222,
  booktitle = {Eurographics Symposium on Rendering - DL-only and Industry Track},
  editor = {Boubekeur, Tamy and Sen, Pradeep},
  title = {{De-lighting a High-resolution Picture for Material Acquisition}},
  author = {Martin, Rosalie and Meyer, Arthur and Pesare, Davide},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-095-6},
  DOI = {10.2312/sr.20191222}
}
@inproceedings{10.2312:sr.20191223,
  booktitle = {Eurographics Symposium on Rendering - DL-only and Industry Track},
  editor = {Boubekeur, Tamy and Sen, Pradeep},
  title = {{The Challenges of Releasing the Moana Island Scene}},
  author = {Tamstorf, Rasmus and Pritchett, Heather},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-095-6},
  DOI = {10.2312/sr.20191223}
}

Recent Submissions

  • Item
    Eurographics Symposium on Rendering 2019 – DL-only / Industry Track: Frontmatter
    (Eurographics Association, 2019) Boubekeur, Tamy; Sen, Pradeep; Boubekeur, Tamy and Sen, Pradeep
  • Item
    Real-Time Hybrid Hair Rendering
    (The Eurographics Association, 2019) Jansson, Erik Sven Vasconcelos; Chajdas, Matthäus G.; Lacroix, Jason; Ragnemalm, Ingemar; Boubekeur, Tamy and Sen, Pradeep
    Rendering hair is a challenging problem for real-time applications. Besides complex shading, the sheer number of strands poses a problem, as a human scalp can have over 100,000 strands of hair, with animal fur often surpassing a million. For rendering, both strand-based and volume-based techniques have been used, but usually in isolation. In this work, we present a complete hair rendering solution based on a hybrid approach. The solution requires no pre-processing, making it a drop-in replacement that combines the best of strand-based and volume-based rendering. Our approach uses a volume not only as a level-of-detail representation that is raymarched directly, but also to simulate global effects, like shadows and ambient occlusion, in real time. (A minimal raymarching sketch of this idea appears after this list.)
  • Item
    Spectral Primary Decomposition for Rendering with sRGB Reflectance
    (The Eurographics Association, 2019) Mallett, Ian; Yuksel, Cem; Boubekeur, Tamy and Sen, Pradeep
    Spectral renderers, compared to RGB renderers, are able to simulate light transport that is closer to reality, capturing light behavior that is impossible to simulate with any three-primary decomposition. However, spectral rendering requires spectral scene data (e.g. textures and material properties), which is not widely available, severely limiting the practicality of spectral rendering. Unfortunately, producing a physically valid reflectance spectrum from a given sRGB triple has been a challenging problem, and indeed until very recently constructing a spectrum without colorimetric round-trip error was thought to be impossible. In this paper, we introduce a new procedure for efficiently generating a reflectance spectrum from any given sRGB input data. We show for the first time that it is possible to create any sRGB reflectance spectrum as a linear combination of three separate spectra, each directly corresponding to one of the BT.709 primaries. Our approach produces consistent results, such that the input sRGB value is perfectly reproduced by the corresponding reflectance spectrum under D65 illumination, bounded only by Monte Carlo and numerical error. We provide a complete implementation, including a precomputed spectral basis, and discuss important optimizations and generalization to other RGB spaces. (A minimal sketch of this basis combination appears after this list.)
  • Item
    Adaptive Multi-view Path Tracing
    (The Eurographics Association, 2019) Fraboni, Basile; Iehl, Jean-Claude; Nivoliers, Vincent; Bouchard, Guillaume; Boubekeur, Tamy and Sen, Pradeep
    Rendering photo-realistic image sequences using path tracing and Monte Carlo integration often requires sampling a large number of paths to get converged results. In the context of rendering multiple views or animated sequences, such sampling can be highly redundant. Several methods have been developed to share sampled paths between spatially or temporally similar views. However, such sharing is challenging since it can lead to bias in the final images. Our contribution is a Monte Carlo sampling technique which generates paths that take several cameras into account. First, we sample the scene from all the cameras to generate hit points. Then, an importance sampling technique generates bouncing directions which are shared by a subset of cameras. This set of hit points and bouncing directions is then used within a regular path tracing solution. For animated scenes, paths remain valid for a fixed time only, but sharing can still occur between cameras as long as their exposure time intervals overlap. We show that our technique generates less noise than regular path tracing and does not introduce noticeable bias.
  • Item
    Impulse Responses for Precomputing Light from Volumetric Media
    (The Eurographics Association, 2019) Dubouchet, Adrien; Sloan, Peter-Pike; Jarosz, Wojciech; Nowrouzezahrai, Derek; Boubekeur, Tamy and Sen, Pradeep
    Modern interactive rendering can rely heavily on precomputed static lighting on surfaces and in volumes. Scattering from volumetric media can be similarly treated using precomputation, but transport from volumes onto surfaces is typically ignored. We propose a compact, efficient method to simulate volume-to-surface transport during lighting precomputation. We leverage a novel model of the spherical impulse response of light scattered (and attenuated) in volumetric media to simulate light transport from volumes onto surfaces with simple precomputed lookup tables. These tables model the impulse response as a function of distance and angle to the light and surfaces. We then remap the impulse responses to media with arbitrary, potentially heterogeneous scattering parameters and various phase functions. Moreover, we can compose our impulse response model to treat multiple scattering events in the volume (arriving at surfaces). We apply our method to precomputed volume-to-surface light transport in complex scenes, generating results indistinguishable from ground truth simulations. Our tables allow us to precompute volume-to-surface transport orders of magnitude faster than even an optimized path-tracing-based solution would. (A generic table-lookup sketch appears after this list.)
  • Item
    Foveated Real-Time Path Tracing in Visual-Polar Space
    (The Eurographics Association, 2019) Koskela, Matias; Lotvonen, Atro; Mäkitalo, Markku; Kivi, Petrus; Viitanen, Timo; Jääskeläinen, Pekka; Boubekeur, Tamy and Sen, Pradeep
    Computing power is still the limiting factor in photorealistic real-time rendering. Foveated rendering improves perceived quality by focusing the rendering effort where the user is looking. Applying foveated rendering to real-time path tracing, where we must work with a very small number of samples per pixel, introduces additional challenges; the rendering result is thoroughly noisy and sparse in the periphery. In this paper we demonstrate a foveated real-time path tracing system and propose a novel Visual-Polar space in which both real-time path tracing and denoising are done before mapping to screen space. When path tracing a regular grid of samples in Visual-Polar space, the screen space sample distribution follows the human visual acuity model, making both the rendering and denoising 2.5x faster with similar perceived quality. In addition, when using Visual-Polar space, primary rays stay more coherent, leading to improved utilization of GPU resources and, therefore, making ray traversal 1.3-1.5x faster. Moreover, Visual-Polar space improves 1 sample per pixel denoising quality in the fovea. We show that Visual-Polar based path tracing enables real-time rendering for contemporary virtual reality devices even without dedicated ray tracing hardware acceleration. (A minimal sketch of the Visual-Polar mapping appears after this list.)
  • Item
    Puppet Dubbing
    (The Eurographics Association, 2019) Fried, Ohad; Agrawala, Maneesh; Boubekeur, Tamy and Sen, Pradeep
    Dubbing puppet videos to make the characters (e.g. Kermit the Frog) convincingly speak a new speech track is a popular activity with many examples of well-known puppets speaking lines from films or singing rap songs. But manually aligning puppet mouth movements to match a new speech track is tedious as each syllable of the speech must match a closed-open-closed segment of mouth movement for the dub to be convincing. In this work, we present two methods to align a new speech track with puppet video, one semi-automatic appearance-based and the other fully-automatic audio-based. The methods offer complementary advantages and disadvantages. Our appearance-based approach directly identifies closed-open-closed segments in the puppet video and is robust to low-quality audio as well as misalignments between the mouth movements and speech in the original performance, but requires some manual annotation. Our audio-based approach assumes the original performance matches a closed-open-closed mouth segment to each syllable of the original speech. It is fully automatic, robust to visual occlusions and fast puppet movements, but does not handle misalignments in the original performance. We compare the methods and show that both improve the credibility of the resulting video over simple baseline techniques, via quantitative evaluation and user ratings.
  • Item
    Implementing One-Click Caustics in Corona Renderer
    (The Eurographics Association, 2019) Šik, Martin; Křivánek, Jaroslav; Boubekeur, Tamy and Sen, Pradeep
    This paper describes the implementation of a fully automatic caustics rendering solution in Corona Renderer. The main requirement is that the technique be completely transparent to the user, require no parameter setting at all, and be fully integrated into the interactive and progressive rendering workflow. We base our approach on an efficient subset of the vertex connection and merging algorithm, specifically a multiple importance sampling combination of path tracing and photon mapping. We rely on Metropolis sampling to guide photon paths into the relevant parts of the scene. While these underlying ideas have appeared in existing research work, numerous previously unaddressed issues and edge cases arise when one applies these ideas in practice. These include unreliable convergence of the Metropolis sampler in scenes with many light sources of different sizes and intensities, the "caustic in a stadium" problem (i.e., efficient rendering of small caustics in extremely large scenes), etc. We present the solutions we have developed to address such issues, yielding what we call "one-click caustics rendering". User feedback suggests that our approach substantially improves usability over methods previously implemented in commercially available software, all of which require the user to set various technical parameters. (A generic MIS-weight sketch appears after this list.)
  • Item
    De-lighting a High-resolution Picture for Material Acquisition
    (The Eurographics Association, 2019) Martin, Rosalie; Meyer, Arthur; Pesare, Davide; Boubekeur, Tamy and Sen, Pradeep
    We propose a deep-learning-based method for the removal of shading, projected shadows and highlights from a single picture of a quasi-planar surface captured in natural lighting conditions with any kind of camera device. To achieve this, we train an encoder-decoder to process physically based materials, rendered under various lighting conditions, to infer their spatially-varying albedo. Our network processes relatively small image tiles (512x512 pixels) and we propose a solution to handle larger image resolutions by solving a Poisson system across these tiles.
  • Item
    The Challenges of Releasing the Moana Island Scene
    (The Eurographics Association, 2019) Tamstorf, Rasmus; Pritchett, Heather; Boubekeur, Tamy and Sen, Pradeep
    A tremendous amount of research has been done over the years using the Stanford bunny, the Cornell box and recently somewhat more complicated data sets. Yet, none of these data sets come close to representing the complexity that production houses and film studios handle on a daily basis. In recent years industry members have lamented this lack of realistic examples, and in return academics have requested that more representative examples be made available. Both of these points are valid, which in turn has led to the release of the Moana Island Scene dataset. However, while it sounds simple, the actual release of such data leads to numerous philosophical and practical questions. The goal of this paper is to present some of the challenges associated with releasing production data for academic use.
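
Real-Time Hybrid Hair Rendering: the abstract above mentions raymarching a hair volume to obtain global effects such as shadowing and ambient occlusion. The code below is only a minimal sketch of that general idea under assumed inputs (a hypothetical voxelised strand-density grid and extinction coefficient); it is not the paper's renderer.

    import numpy as np

    def transmittance_along_ray(density, origin, direction, step, num_steps, sigma_t):
        """Raymarch a voxelised hair-density grid and accumulate transmittance.

        'density' is a hypothetical 3D array of strand density per voxel; 'origin'
        and 'direction' are given in voxel coordinates. The returned transmittance
        can serve as an approximate shadow/occlusion term.
        """
        transmittance = 1.0
        position = np.array(origin, dtype=float)
        direction = np.asarray(direction, dtype=float)
        for _ in range(num_steps):
            index = tuple(np.clip(position.astype(int), 0, np.array(density.shape) - 1))
            transmittance *= np.exp(-sigma_t * density[index] * step)
            position = position + direction * step
        return transmittance

    # Example with a dummy 32^3 density grid containing a denser block of "hair".
    grid = np.zeros((32, 32, 32))
    grid[10:20, 10:20, 10:20] = 0.5
    t = transmittance_along_ray(grid, origin=(0.0, 15.0, 15.0),
                                direction=(1.0, 0.0, 0.0),
                                step=1.0, num_steps=32, sigma_t=1.2)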
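
Spectral Primary Decomposition for Rendering with sRGB Reflectance: the core identity described in the abstract is a per-wavelength linear combination of three basis spectra weighted by the linear-sRGB triple. The sketch below only illustrates that combination; the flat placeholder basis arrays stand in for the paper's precomputed spectra and do not reproduce its colorimetric guarantees.

    import numpy as np

    # Hypothetical stand-ins for the precomputed basis spectra, sampled at shared
    # wavelengths (here 380-730 nm in 5 nm steps); real basis data would differ.
    wavelengths = np.arange(380.0, 731.0, 5.0)
    r_basis = np.full_like(wavelengths, 1.0 / 3.0)
    g_basis = np.full_like(wavelengths, 1.0 / 3.0)
    b_basis = np.full_like(wavelengths, 1.0 / 3.0)

    def srgb_to_linear(c):
        """Standard sRGB decoding (per channel), mapping gamma values to linear."""
        c = np.asarray(c, dtype=float)
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def reflectance_from_srgb(srgb):
        """Reflectance spectrum as a linear combination of the three basis spectra."""
        r, g, b = srgb_to_linear(srgb)
        return r * r_basis + g * g_basis + b * b_basis

    spectrum = reflectance_from_srgb([0.8, 0.3, 0.1])  # one value per wavelength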
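
Impulse Responses for Precomputing Light from Volumetric Media: the abstract describes lookup tables indexed by distance and angle to the light. The function below is a generic bilinear table lookup under assumed axes (rows index distance, columns index angle); it is not the paper's parameterisation or remapping scheme.

    import numpy as np

    def lookup_impulse_response(table, distance, angle, max_distance):
        """Bilinearly interpolate a precomputed (distance, angle) response table.

        'table' is assumed to hold scattered-radiance values on a regular grid:
        rows index distance in [0, max_distance], columns index angle in [0, pi].
        """
        rows, cols = table.shape
        d = np.clip(distance / max_distance, 0.0, 1.0) * (rows - 1)
        a = np.clip(angle / np.pi, 0.0, 1.0) * (cols - 1)
        d0, a0 = int(d), int(a)
        d1, a1 = min(d0 + 1, rows - 1), min(a0 + 1, cols - 1)
        fd, fa = d - d0, a - a0
        top = (1.0 - fa) * table[d0, a0] + fa * table[d0, a1]
        bottom = (1.0 - fa) * table[d1, a0] + fa * table[d1, a1]
        return (1.0 - fd) * top + fd * bottom

    # Example lookup in a dummy 32x16 table.
    response = lookup_impulse_response(np.ones((32, 16)),
                                       distance=3.0, angle=0.7, max_distance=10.0)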
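
Foveated Real-Time Path Tracing in Visual-Polar Space: the key property in the abstract is that a regular sample grid in Visual-Polar space maps to a screen-space distribution that is dense near the gaze point and sparse in the periphery. The power-law falloff below is an illustrative placeholder rather than the acuity model used in the paper.

    import numpy as np

    def visual_polar_to_screen(u, v, gaze, max_radius, falloff=2.0):
        """Map regular Visual-Polar samples (u, v) in [0, 1]^2 to screen coordinates.

        u indexes eccentricity and v indexes angle. The power-law 'falloff' stands
        in for a perceptual acuity model: equal steps in u land densely near the
        gaze point and sparsely in the periphery.
        """
        radius = max_radius * (np.asarray(u, dtype=float) ** falloff)
        theta = 2.0 * np.pi * np.asarray(v, dtype=float)
        x = gaze[0] + radius * np.cos(theta)
        y = gaze[1] + radius * np.sin(theta)
        return x, y

    # A 64x64 Visual-Polar grid mapped onto a 1920x1080 frame, gaze at the centre.
    u, v = np.meshgrid(np.linspace(0.0, 1.0, 64), np.linspace(0.0, 1.0, 64))
    xs, ys = visual_polar_to_screen(u, v, gaze=(960.0, 540.0), max_radius=1100.0)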
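
Implementing One-Click Caustics in Corona Renderer: the abstract mentions a multiple importance sampling combination of path tracing and photon mapping. The snippet below shows only the generic balance heuristic such combinations are typically built on; it is not Corona's implementation and ignores the extra factors a full vertex connection and merging weight would include.

    def balance_heuristic(pdf_chosen, pdf_other):
        """Balance-heuristic MIS weight for the technique that produced a sample.

        Given a path-tracing estimate and a photon-mapping estimate of the same
        contribution, weighting each by its pdf relative to the sum keeps the
        combined estimator consistent while favouring whichever technique samples
        a given caustic path more densely.
        """
        return pdf_chosen / (pdf_chosen + pdf_other)

    # A caustic path that photon mapping samples far more densely than
    # unidirectional path tracing is dominated by the photon-mapping estimate.
    w_pt = balance_heuristic(0.02, 4.0)  # weight for the path-tracing sample, ~0.005
    w_pm = balance_heuristic(4.0, 0.02)  # weight for the photon-mapping sample, ~0.995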