Computer Graphics Forum, Volume 37, Issue 4

Karlsruhe, Germany, 1-4 July 2018
Proceedings of Rendering - Experimental Ideas & Implementations 2018.
Acquisition
Acquisition and Validation of Spectral Ground Truth Data for Predictive Rendering of Rough Surfaces
Olaf Clausen, Ricardo Marroquim, and Arnulph Fuhrmann
Sampling
Stratified Sampling of Projected Spherical Caps
Carlos Ureña and Iliyan Georgiev
Progressive Multi-Jittered Sample Sequences
Per Christensen, Andrew Kensler, and Charlie Kilpatrick
Deep Adaptive Sampling for Low Sample Count Rendering
Alexandr Kuznetsov, Nima Khademi Kalantari, and Ravi Ramamoorthi
Rendering Techniques I
Spectral Gradient Sampling for Path Tracing
Victor Petitjean, Pablo Bauszat, and Elmar Eisemann
Materials
A Composite BRDF Model for Hazy Gloss
Pascal Barla, Romain Pacanowski, and Peter Vangorp
A Physically-based Appearance Model for Special Effect Pigments
Jie Guo, Yanjun Chen, Yanwen Guo, and Jingui Pan
Handling Fluorescence in a Uni-directional Spectral Path Tracer
Michal Mojzík, Alban Fichet, and Alexander Wilkie
Image-based Techniques
Deep Painting Harmonization
Fujun Luan, Sylvain Paris, Eli Shechtman, and Kavita Bala
Thin Structures in Image Based Rendering
Theo Thonat, Abdelaziz Djelouah, Fredo Durand, and George Drettakis
Exploiting Repetitions for Image-Based Rendering of Facades
Simon Rodriguez, Adrien Bousseau, Fredo Durand, and George Drettakis
Rendering Techniques II
Efficient Caustic Rendering with Lightweight Photon Mapping
Pascal Grittmann, Arsène Pérard-Gayot, Philipp Slusallek, and Jaroslav Křivánek
Real-time Rendering
Runtime Shader Simplification via Instant Search in Reduced Optimization Space
Yazhen Yuan, Rui Wang, Tianlei Hu, and Hujun Bao
On-the-Fly Power-Aware Rendering
Yunjin Zhang, Marta Ortín, Victor Arellano, Rui Wang, Diego Gutierrez, and Hujun Bao
Screen-space Methods
Quad-Based Fourier Transform for Efficient Diffraction Synthesis
Leonardo Scandolo, Sungkil Lee, and Elmar Eisemann

BibTeX (Volume 37, Issue 4)
                
@article{10.1111:cgf.13470,
  journal   = {Computer Graphics Forum},
  title     = {{Acquisition and Validation of Spectral Ground Truth Data for Predictive Rendering of Rough Surfaces}},
  author    = {Clausen, Olaf and Marroquim, Ricardo and Fuhrmann, Arnulph},
  year      = {2018},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.13470}
}

@article{10.1111:cgf.13471,
  journal   = {Computer Graphics Forum},
  title     = {{Stratified Sampling of Projected Spherical Caps}},
  author    = {Ureña, Carlos and Georgiev, Iliyan},
  year      = {2018},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.13471}
}

@article{10.1111:cgf.13472,
  journal   = {Computer Graphics Forum},
  title     = {{Progressive Multi-Jittered Sample Sequences}},
  author    = {Christensen, Per and Kensler, Andrew and Kilpatrick, Charlie},
  year      = {2018},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.13472}
}

@article{10.1111:cgf.13473,
  journal   = {Computer Graphics Forum},
  title     = {{Deep Adaptive Sampling for Low Sample Count Rendering}},
  author    = {Kuznetsov, Alexandr and Kalantari, Nima Khademi and Ramamoorthi, Ravi},
  year      = {2018},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.13473}
}

@article{10.1111:cgf.13474,
  journal   = {Computer Graphics Forum},
  title     = {{Spectral Gradient Sampling for Path Tracing}},
  author    = {Petitjean, Victor and Bauszat, Pablo and Eisemann, Elmar},
  year      = {2018},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.13474}
}

@article{10.1111:cgf.13475,
  journal   = {Computer Graphics Forum},
  title     = {{A Composite BRDF Model for Hazy Gloss}},
  author    = {Barla, Pascal and Pacanowski, Romain and Vangorp, Peter},
  year      = {2018},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.13475}
}

@article{10.1111:cgf.13476,
  journal   = {Computer Graphics Forum},
  title     = {{A Physically-based Appearance Model for Special Effect Pigments}},
  author    = {Guo, Jie and Chen, Yanjun and Guo, Yanwen and Pan, Jingui},
  year      = {2018},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.13476}
}

@article{10.1111:cgf.13477,
  journal   = {Computer Graphics Forum},
  title     = {{Handling Fluorescence in a Uni-directional Spectral Path Tracer}},
  author    = {Mojzík, Michal and Fichet, Alban and Wilkie, Alexander},
  year      = {2018},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.13477}
}

@article{10.1111:cgf.13478,
  journal   = {Computer Graphics Forum},
  title     = {{Deep Painting Harmonization}},
  author    = {Luan, Fujun and Paris, Sylvain and Shechtman, Eli and Bala, Kavita},
  year      = {2018},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.13478}
}

@article{10.1111:cgf.13479,
  journal   = {Computer Graphics Forum},
  title     = {{Thin Structures in Image Based Rendering}},
  author    = {Thonat, Theo and Djelouah, Abdelaziz and Durand, Fredo and Drettakis, George},
  year      = {2018},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.13479}
}

@article{10.1111:cgf.13480,
  journal   = {Computer Graphics Forum},
  title     = {{Exploiting Repetitions for Image-Based Rendering of Facades}},
  author    = {Rodriguez, Simon and Bousseau, Adrien and Durand, Fredo and Drettakis, George},
  year      = {2018},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.13480}
}

@article{10.1111:cgf.13481,
  journal   = {Computer Graphics Forum},
  title     = {{Efficient Caustic Rendering with Lightweight Photon Mapping}},
  author    = {Grittmann, Pascal and Pérard-Gayot, Arsène and Slusallek, Philipp and Křivánek, Jaroslav},
  year      = {2018},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.13481}
}

@article{10.1111:cgf.13482,
  journal   = {Computer Graphics Forum},
  title     = {{Runtime Shader Simplification via Instant Search in Reduced Optimization Space}},
  author    = {Yuan, Yazhen and Wang, Rui and Hu, Tianlei and Bao, Hujun},
  year      = {2018},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.13482}
}

@article{10.1111:cgf.13483,
  journal   = {Computer Graphics Forum},
  title     = {{On-the-Fly Power-Aware Rendering}},
  author    = {Zhang, Yunjin and Ortín, Marta and Arellano, Victor and Wang, Rui and Gutierrez, Diego and Bao, Hujun},
  year      = {2018},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.13483}
}

@article{10.1111:cgf.13484,
  journal   = {Computer Graphics Forum},
  title     = {{Quad-Based Fourier Transform for Efficient Diffraction Synthesis}},
  author    = {Scandolo, Leonardo and Lee, Sungkil and Eisemann, Elmar},
  year      = {2018},
  publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.13484}
}

Recent Submissions

  • Item
    Eurographics Symposium on Rendering: Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Jakob, Wenzel; Hachisuka, Toshiya
  • Item
    Acquisition and Validation of Spectral Ground Truth Data for Predictive Rendering of Rough Surfaces
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Clausen, Olaf; Marroquim, Ricardo; Fuhrmann, Arnulph; Jakob, Wenzel and Hachisuka, Toshiya
    Physically based rendering uses principles of physics to model the interaction of light with matter. Even though it is possible to achieve photorealistic renderings, it often fails to be predictive. There are two major issues: first, there is no analytic material model that considers all appearance-critical characteristics; second, light is in many cases described by only three RGB samples. This leads to the problem that there are different models for different material types and that wavelength-dependent phenomena are only approximated. In order to be able to analyze the influence of both problems on the appearance of real world materials, an accurate comparison between rendering and reality is necessary. Therefore, in this work, we acquired a set of precisely and spectrally resolved ground truth data. It consists of the precise description of a newly developed reference scene including isotropic BRDFs of 24 color patches, as well as the reference measurements of all patches under 13 different angles inside the reference scene. Our reference data covers rough materials with many different spectral distributions and various illumination situations, from direct light to indirect-light-dominated situations.
  • Item
    Stratified Sampling of Projected Spherical Caps
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Ureña, Carlos; Georgiev, Iliyan; Jakob, Wenzel and Hachisuka, Toshiya
    We present a method for uniformly sampling points inside the projection of a spherical cap onto a plane through the sphere's center. To achieve this, we devise two novel area-preserving mappings from the unit square to this projection, which is often an ellipse but generally has a more complex shape. Our maps allow for low-variance rendering of direct illumination from finite and infinite (e.g. sun-like) spherical light sources by sampling their projected solid angle in a stratified manner. We discuss the practical implementation of our maps and show significant quality improvement over traditional uniform spherical cap sampling in a production renderer.
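The paper's area-preserving maps are involved, but the baseline it improves on, traditional uniform spherical-cap sampling, is standard and easy to sketch. Below is a minimal Python version (function and parameter names are illustrative, not from the paper): it draws a direction uniformly by solid angle within a cap of aperture angle theta_max around +Z.

```python
import math

def sample_spherical_cap(cos_theta_max, u, v):
    """Uniformly sample a direction (by solid angle) in the spherical cap
    around +Z whose boundary satisfies cos(theta) = cos_theta_max.

    u, v are uniform random numbers in [0, 1). This is the traditional
    cap sampling the paper compares against, not its projected-solid-angle
    mappings.
    """
    # Uniform in cos(theta) over [cos_theta_max, 1] is uniform by area on the cap.
    cos_theta = 1.0 - u * (1.0 - cos_theta_max)
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    phi = 2.0 * math.pi * v
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)
```

Feeding stratified (u, v) pairs into this map stratifies the cap itself, but not its projection onto the receiver plane, which is exactly the gap the paper's mappings close.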
  • Item
    Progressive Multi-Jittered Sample Sequences
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Christensen, Per; Kensler, Andrew; Kilpatrick, Charlie; Jakob, Wenzel and Hachisuka, Toshiya
    We introduce three new families of stochastic algorithms to generate progressive 2D sample point sequences. This opens a general framework that researchers and practitioners may find useful when developing future sample sequences. Our best sequences have the same low sampling error as the best known sequence (a particular randomization of the Sobol' (0,2) sequence). The sample points are generated using a simple, diagonally alternating strategy that progressively fills in holes in increasingly fine stratifications. The sequences are progressive (hierarchical): any prefix is well distributed, making them suitable for incremental rendering and adaptive sampling. The first sample family is only jittered in 2D; we call it progressive jittered. It is nearly identical to existing sample sequences. The second family is multi-jittered: the samples are stratified in both 1D and 2D; we call it progressive multi-jittered. The third family is stratified in all elementary intervals in base 2, hence we call it progressive multi-jittered (0,2). We compare sampling error and convergence of our sequences with uniform random, best candidates, randomized quasi-random sequences (Halton and Sobol'), Ahmed's ART sequences, and Perrier's LDBN sequences. We test the sequences on function integration and in two settings that are typical for computer graphics: pixel sampling and area light sampling. Within this new framework we present variations that generate visually pleasing samples with blue noise spectra, and well-stratified interleaved multi-class samples; we also suggest possible future variations.
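The hole-filling idea behind the progressive jittered family can be illustrated with a short sketch. The following Python function is my own simplification, not the paper's algorithm: it fills empty strata in plain scan order rather than the paper's diagonally alternating order, so only full power-of-four prefixes are guaranteed one point per cell, and it has none of the additional 1D stratification of the multi-jittered variants.

```python
import random

def progressive_jittered(n, seed=0):
    """Sketch of a progressive jittered 2D sequence: every prefix of
    4^k points has exactly one jittered point in each cell of a
    2^k x 2^k grid. Stratum-filling order is simplified to scan order.
    """
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random())]  # first sample: anywhere in the square
    grid = 1
    while len(pts) < n:
        grid *= 2  # refine the stratification to grid x grid cells
        occupied = {(int(x * grid), int(y * grid)) for (x, y) in pts}
        for cy in range(grid):
            for cx in range(grid):
                if (cx, cy) not in occupied and len(pts) < n:
                    # jitter a new point inside the empty cell
                    pts.append(((cx + rng.random()) / grid,
                                (cy + rng.random()) / grid))
    return pts[:n]
```

Because each refinement only adds points to cells that are still empty, existing samples are never moved, which is what makes the sequence progressive.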
  • Item
    Deep Adaptive Sampling for Low Sample Count Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Kuznetsov, Alexandr; Kalantari, Nima Khademi; Ramamoorthi, Ravi; Jakob, Wenzel and Hachisuka, Toshiya
    Recently, deep learning approaches have proven successful at removing noise from Monte Carlo (MC) rendered images at extremely low sampling rates, e.g., 1-4 samples per pixel (spp). While these methods provide dramatic speedups, they operate on uniformly sampled MC rendered images. However, the full promise of low sample counts requires both adaptive sampling and reconstruction/denoising. Unfortunately, traditional adaptive sampling techniques fail at low sampling rates, since there is insufficient information to reliably calculate their required features, such as variance and contrast. In this paper, we address this issue by proposing a deep learning approach for joint adaptive sampling and reconstruction of MC rendered images with extremely low sample counts. Our system consists of two convolutional neural networks (CNNs), responsible for estimating the sampling map and denoising, separated by a renderer. Specifically, we first render a scene with one spp and then use the first CNN to estimate a sampling map, which is used to adaptively distribute three additional samples per pixel on average. We then filter the resulting render with the second CNN to produce the final denoised image. We train both networks by minimizing the error between the denoised and ground truth images on a set of training scenes. To use backpropagation for training both networks, we propose an approach to effectively compute the gradient of the renderer. We demonstrate that our approach produces better results compared to other sampling techniques. On average, our 4 spp renders are comparable to 6 spp from uniform sampling with deep learning-based denoising. Therefore, 50% more uniformly distributed samples are required to achieve equal quality without adaptive sampling.
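The budget-distribution step between the two networks can be illustrated independently of any CNN: given a nonnegative sampling map, spread an average of three extra samples per pixel in proportion to the map. This NumPy sketch is a plausible allocation scheme under that reading of the abstract, with names of my own choosing; the paper's actual discretization may differ.

```python
import numpy as np

def allocate_samples(sampling_map, avg_extra_spp=3):
    """Distribute an extra-sample budget (avg_extra_spp * num_pixels)
    across pixels proportionally to a nonnegative sampling map."""
    w = np.maximum(np.asarray(sampling_map, dtype=float), 0.0)
    budget = avg_extra_spp * w.size
    total = w.sum()
    if total == 0.0:  # degenerate map: fall back to uniform allocation
        return np.full(w.shape, avg_extra_spp, dtype=int)
    counts = np.floor(w / total * budget).astype(int)
    # hand leftover samples to the pixels with the largest rounding error
    remainder = int(budget - counts.sum())
    if remainder > 0:
        err = w / total * budget - counts
        idx = np.argsort(err, axis=None)[::-1][:remainder]
        counts.flat[idx] += 1
    return counts
```

The floor-then-distribute-remainder step keeps the total sample count exactly equal to the budget, which matters when comparing against a fixed uniform spp baseline.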
  • Item
    Spectral Gradient Sampling for Path Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Petitjean, Victor; Bauszat, Pablo; Eisemann, Elmar; Jakob, Wenzel and Hachisuka, Toshiya
    Spectral Monte-Carlo methods are currently the most powerful techniques for simulating light transport with wavelength-dependent phenomena (e.g., dispersion, colored particle scattering, or diffraction gratings). Compared to trichromatic rendering, sampling the spectral domain requires significantly more samples for noise-free images. Inspired by gradient-domain rendering, which estimates image gradients, we propose spectral gradient sampling to estimate the gradients of the spectral distribution inside a pixel. These gradients can be sampled with a significantly lower variance by carefully correlating the path samples of a pixel in the spectral domain, and we introduce a mapping function that shifts paths with wavelength-dependent interactions. We compute the result of each pixel by integrating the estimated gradients over the spectral domain using a one-dimensional screened Poisson reconstruction. Our method improves convergence and reduces chromatic noise from spectral sampling, as demonstrated by our implementation within a conventional path tracer.
  • Item
    A Composite BRDF Model for Hazy Gloss
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Barla, Pascal; Pacanowski, Romain; Vangorp, Peter; Jakob, Wenzel and Hachisuka, Toshiya
    We introduce a bidirectional reflectance distribution function (BRDF) model for the rendering of materials that exhibit hazy reflections, whereby the specular reflections appear to be flanked by a surrounding halo. The focus of this work is on artistic control and ease of implementation for real-time and off-line rendering. We propose relying on a composite material based on a pair of arbitrary BRDF models; however, instead of controlling their physical parameters, we expose perceptual parameters inspired by visual experiments [VBF17]. Our main contribution then consists in a mapping from perceptual to physical parameters that ensures the resulting composite BRDF is valid in terms of reciprocity, positivity and energy conservation. The immediate benefit of our approach is to provide direct artistic control over both the intensity and extent of the haze effect, which is not only necessary for editing purposes, but also essential to vary haziness spatially over an object surface. Our solution is also simple to implement as it requires no new importance sampling strategy and relies on existing BRDF models. Such a simplicity is key to approximating the method for the editing of hazy gloss in real-time and for compositing.
  • Item
    A Physically-based Appearance Model for Special Effect Pigments
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Guo, Jie; Chen, Yanjun; Guo, Yanwen; Pan, Jingui; Jakob, Wenzel and Hachisuka, Toshiya
    An appearance model for materials adhered with massive collections of special effect pigments has to take both high-frequency spatial details (e.g., glints) and wave-optical effects (e.g., iridescence) due to thin-film interference into account. However, either phenomenon is challenging to characterize and simulate in a physically accurate way. Capturing these fascinating effects in a unified framework is even harder as the normal distribution function and the reflectance term are highly correlated and cannot be treated separately. In this paper, we propose a multi-scale BRDF model for reproducing the main visual effects generated by the discrete assembly of special effect pigments, enabling a smooth transition from fine-scale surface details to large-scale iridescent patterns. We demonstrate that the wavelength-dependent reflectance inside the pixel's footprint follows a Gaussian distribution according to the central limit theorem, and is closely related to the distribution of the thin-film's thickness. We efficiently determine the mean and the variance of this Gaussian distribution for each pixel whose closed-form expressions can be derived by assuming that the thin-film's thickness is uniformly distributed. To validate its effectiveness, the proposed model is compared against some previous methods and photographs of actual materials. Furthermore, since our method does not require any scene-dependent precomputation, the distribution of thickness is allowed to be spatially-varying.
  • Item
    Handling Fluorescence in a Uni-directional Spectral Path Tracer
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Mojzík, Michal; Fichet, Alban; Wilkie, Alexander; Jakob, Wenzel and Hachisuka, Toshiya
    We present two separate improvements to the handling of fluorescence effects in modern uni-directional spectral rendering systems. The first is the formulation of a new distance tracking scheme for fluorescent volume materials which exhibit a pronounced wavelength asymmetry. Such volumetric materials are an important and not uncommon corner case of wavelength-shifting media behaviour, and have not been addressed so far in rendering literature. The second is an extension of Hero wavelength sampling that can handle fluorescence events, both on surfaces and in volumes. Both improvements are useful by themselves, and can be used separately: when used together, they enable the robust inclusion of arbitrary fluorescence effects in modern uni-directional spectral MIS path tracers. Our extension of Hero wavelength sampling is generally useful, while our proposed technique for distance tracking in strongly asymmetric media is admittedly not very efficient. However, it makes the most of a rather difficult situation, and at least allows the inclusion of such media in uni-directional path tracers, albeit at comparatively high cost. This is still an improvement, since until now their inclusion was not really possible at all, due to the inability of conventional tracking schemes to generate sampling points in such volume materials.
  • Item
    Deep Painting Harmonization
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Luan, Fujun; Paris, Sylvain; Shechtman, Eli; Bala, Kavita; Jakob, Wenzel and Hachisuka, Toshiya
    Copying an element from a photo and pasting it into a painting is a challenging task. Applying photo compositing techniques in this context yields subpar results that look like a collage - and existing painterly stylization algorithms, which are global, perform poorly when applied locally. We address these issues with a dedicated algorithm that carefully determines the local statistics to be transferred. We ensure both spatial and inter-scale statistical consistency and demonstrate that both aspects are key to generating quality results. To cope with the diversity of abstraction levels and types of paintings, we introduce a technique to adjust the parameters of the transfer depending on the painting. We show that our algorithm produces significantly better results than photo compositing or global stylization techniques and that it enables creative painterly edits that would be otherwise difficult to achieve.
  • Item
    Thin Structures in Image Based Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Thonat, Theo; Djelouah, Abdelaziz; Durand, Fredo; Drettakis, George; Jakob, Wenzel and Hachisuka, Toshiya
    We propose a novel method to handle thin structures in Image-Based Rendering (IBR), and specifically structures supported by simple geometric shapes such as planes, cylinders, etc. These structures, e.g., railings, fences, and oven grills, are present in many man-made environments and are extremely challenging for multi-view 3D reconstruction, representing a major limitation of existing IBR methods. Our key insight is to exploit multi-view information. After a handful of user clicks to specify the supporting geometry, we compute multi-view and multi-layer alpha mattes to extract the thin structures. We use two multi-view terms in a graph-cut segmentation, the first based on multi-view foreground color prediction and the second ensuring multi-view consistency of labels. Occlusion of the background can challenge reprojection-error calculation, so we use multi-view median images and variance, with multiple layers of thin structures. Our end-to-end solution uses the multi-layer segmentation to create per-view mattes and the median colors and variance to create a clean background. We introduce a new multi-pass IBR algorithm based on depth-peeling to allow free-viewpoint navigation of multi-layer semi-transparent thin structures. Our results show significant improvement in rendering quality for thin structures compared to previous image-based rendering solutions.
  • Item
    Exploiting Repetitions for Image-Based Rendering of Facades
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Rodriguez, Simon; Bousseau, Adrien; Durand, Fredo; Drettakis, George; Jakob, Wenzel and Hachisuka, Toshiya
    Street-level imagery is now abundant but does not have sufficient capture density to be usable for Image-Based Rendering (IBR) of facades. We present a method that exploits repetitive elements in facades - such as windows - to perform data augmentation, in turn improving camera calibration, reconstructed geometry and overall rendering quality for IBR. The main intuition behind our approach is that a few views of several instances of an element provide similar information to many views of a single instance of that element. We first select similar instances of an element from 3-4 views of a facade and transform them into a common coordinate system, creating a ''platonic'' element. We use this common space to refine the camera calibration of each view of each instance and to reconstruct a 3D mesh of the element with multi-view stereo, that we regularize to obtain a piecewise-planar mesh aligned with dominant image contours. Observing the same element under multiple views also allows us to identify reflective areas - such as glass panels - which we use at rendering time to generate plausible reflections using an environment map. Our detailed 3D mesh, augmented set of views, and reflection mask enable image-based rendering of much higher quality than results obtained using the input images directly.
  • Item
    Efficient Caustic Rendering with Lightweight Photon Mapping
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Grittmann, Pascal; Pérard-Gayot, Arsène; Slusallek, Philipp; Křivánek, Jaroslav; Jakob, Wenzel and Hachisuka, Toshiya
    Robust and efficient rendering of complex lighting effects, such as caustics, remains a challenging task. While algorithms like vertex connection and merging can render such effects robustly, their significant overhead over a simple path tracer is not always justified and - as we show in this paper - also not necessary. In current rendering solutions, caustics often require the user to enable a specialized algorithm, usually a photon mapper, and hand-tune its parameters. But even with carefully chosen parameters, photon mapping may still trace many photons that the path tracer could sample well enough, or, even worse, that are not visible at all. Our goal is robust, yet lightweight, caustics rendering. To that end, we propose a technique to identify and focus computation on the photon paths that offer significant variance reduction over samples from a path tracer. We apply this technique in a rendering solution combining path tracing and photon mapping. The photon emission is automatically guided towards regions where the photons are useful, i.e., provide substantial variance reduction for the currently rendered image. Our method achieves better photon densities with fewer light paths (and thus photons) than emission guiding approaches based on visual importance. In addition, we automatically determine an appropriate number of photons for a given scene, and the algorithm gracefully degenerates to pure path tracing for scenes that do not benefit from photon mapping.
  • Item
    Runtime Shader Simplification via Instant Search in Reduced Optimization Space
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Yuan, Yazhen; Wang, Rui; Hu, Tianlei; Bao, Hujun; Jakob, Wenzel and Hachisuka, Toshiya
    Traditional automatic shader simplification simplifies shaders in an offline process, which is typically carried out in a context-oblivious manner or with the use of some example contexts, e.g., certain hardware platforms, scenes, and uniform parameters. As a result, these pre-simplified shaders may fail at adapting to runtime changes of the rendering context that were not considered in the simplification process. In this paper, we propose a new automatic shader simplification technique, which explores two key aspects of a runtime simplification framework: the optimization space and the instant search for optimal simplified shaders with runtime context. The proposed technique still requires a preprocess stage to process the original shader. However, instead of directly computing optimal simplified shaders, the proposed preprocess generates a reduced shader optimization space. In particular, two heuristic estimates of the quality and performance of simplified shaders are presented to group similar variants into representative ones, which serve as basic graph nodes of the simplification dependency graph (SDG), a new representation of the optimization space. At the runtime simplification stage, a parallel discrete optimization algorithm is employed to instantly search in the SDG for optimal simplified shaders. New data-driven cost models are proposed to predict the runtime quality and performance of simplified shaders on the basis of data collected during runtime. Results show that the selected simplifications of complex shaders achieve 1.6 to 2.5 times speedup and still retain high rendering quality.
  • Item
    On-the-Fly Power-Aware Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Zhang, Yunjin; Ortín, Marta; Arellano, Victor; Wang, Rui; Gutierrez, Diego; Bao, Hujun; Jakob, Wenzel and Hachisuka, Toshiya
    Power saving is a prevailing concern in desktop computers and, especially, in battery-powered devices such as mobile phones. This is generating a growing demand for power-aware graphics applications that can extend battery life, while preserving good quality. In this paper, we address this issue by presenting a real-time power-efficient rendering framework, able to dynamically select the rendering configuration with the best quality within a given power budget. Different from the current state of the art, our method does not require precomputation of the whole camera-view space, nor Pareto curves to explore the vast power-error space; as such, it can also handle dynamic scenes. Our algorithm is based on two key components: our novel power prediction model, and our runtime quality error estimation mechanism. These components allow us to search for the optimal rendering configuration at runtime, being transparent to the user. We demonstrate the performance of our framework on two different platforms: a desktop computer, and a mobile device. In both cases, we produce results close to the maximum quality, while achieving significant power savings.
  • Item
    Quad-Based Fourier Transform for Efficient Diffraction Synthesis
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Scandolo, Leonardo; Lee, Sungkil; Eisemann, Elmar; Jakob, Wenzel and Hachisuka, Toshiya
    Far-field diffraction can be evaluated using the Discrete Fourier Transform (DFT) in image space but it is costly due to its dense sampling. We propose a technique based on a closed-form solution of the continuous Fourier transform for simple vector primitives (quads) and propose a hierarchical and progressive evaluation to achieve real-time performance. Our method is able to simulate diffraction effects in optical systems and can handle varying visibility due to dynamic light sources. Furthermore, it seamlessly extends to near-field diffraction. We show the benefit of our solution in various applications, including realistic real-time glare and bloom rendering.
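The closed-form building block in the simplest configuration is textbook Fourier analysis: the continuous Fourier transform of an axis-aligned rectangle is a product of sinc lobes. The sketch below covers only a centered, axis-aligned quad (the paper handles general quads with hierarchical, progressive evaluation); names are my own.

```python
import numpy as np

def rect_ft(fx, fy, w, h):
    """Closed-form continuous Fourier transform of an axis-aligned
    rectangle of width w and height h centered at the origin.

    Uses the normalized sinc convention of np.sinc, sinc(x) = sin(pi x)/(pi x),
    so the result is w*h at zero frequency with zeros at multiples of 1/w, 1/h.
    """
    return w * h * np.sinc(w * fx) * np.sinc(h * fy)
```

For a rectangular aperture, the far-field diffraction intensity at spatial frequency (fx, fy) is then proportional to `abs(rect_ft(fx, fy, w, h)) ** 2`, which is why avoiding a dense DFT over the aperture image is possible when the aperture decomposes into such primitives.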