Search Results
Now showing 1 - 10 of 21

Item: Next Event Estimation++: Visibility Mapping for Efficient Light Transport Simulation (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Guo, Jerry Jinfeng; Eisemann, Martin; Eisemann, Elmar
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
Abstract: Monte-Carlo rendering requires determining the visibility between scene points as its most common and compute-intensive operation to establish paths between camera and light source. Unfortunately, many tests reveal occlusions, and the corresponding paths do not contribute to the final image. In this work, we present next event estimation++ (NEE++): a visibility mapping technique that performs visibility tests in a more informed way by caching voxel-to-voxel visibility probabilities. We show two scenarios: Russian-roulette-style rejection of visibility tests and direct importance sampling of the visibility. We show applications to next event estimation and light sampling in a uni-directional path tracer, and light-subpath sampling in bi-directional path tracing. The technique is simple to implement, easy to add to existing rendering systems, and comes at almost no cost, as the required information can be directly extracted from the rendering process itself. It discards up to 80% of visibility tests on average, while reducing variance by ~20% compared to other state-of-the-art light sampling techniques with the same number of samples. It gracefully handles complex scenes with efficiency similar to Metropolis light transport techniques but with more uniform convergence.
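
A minimal sketch of the Russian-roulette-style rejection described in the entry above: a cached voxel-to-voxel visibility probability decides whether a shadow ray is traced at all, and surviving samples are reweighted so the estimator stays unbiased. The cache lookup and ray-tracing callables are illustrative assumptions, not the paper's implementation.

```python
import random

# Hypothetical sketch of Russian-roulette visibility testing in the spirit of NEE++.
# 'visibility_prob' (cached voxel-to-voxel probability) and 'trace_shadow_ray'
# are assumed callables, not the paper's actual interface.
def nee_light_sample(shading_pt, light_pt, unoccluded_contrib,
                     visibility_prob, trace_shadow_ray, p_min=0.05):
    p = max(visibility_prob(shading_pt, light_pt), p_min)  # clamp to bound the weight
    if random.random() >= p:
        return 0.0                         # likely occluded: skip the shadow ray entirely
    if not trace_shadow_ray(shading_pt, light_pt):
        return 0.0                         # actually occluded: no contribution
    return unoccluded_contrib / p          # reweight survivors to remain unbiased
```

In expectation this returns the visibility times the unoccluded contribution, so mostly-occluded connections are skipped cheaply at the cost of a modest variance increase.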

Item: Single-Image SVBRDF Estimation with Learned Gradient Descent (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Authors: Luo, Xuejiao; Scandolo, Leonardo; Bousseau, Adrien; Eisemann, Elmar
Editors: Bermano, Amit H.; Kalogerakis, Evangelos
Abstract: Recovering spatially-varying materials from a single photograph of a surface is inherently ill-posed, making direct gradient descent on the reflectance parameters prone to poor minima. Recent methods leverage deep learning either by directly regressing reflectance parameters using feed-forward neural networks or by learning a latent space of SVBRDFs using encoder-decoder or generative adversarial networks, followed by a gradient-based optimization in latent space. The former is fast but does not account for the likelihood of the prediction, i.e., how well the resulting reflectance explains the input image. The latter provides a strong prior on the space of spatially-varying materials, but this prior can hinder the reconstruction of images that are too different from the training data. Our method combines the strengths of both approaches. We optimize reflectance parameters to best reconstruct the input image using a recurrent neural network, which iteratively predicts how to update the reflectance parameters given the gradient of the reconstruction likelihood. By combining a learned prior with a likelihood measure, our approach provides a maximum a posteriori estimate of the SVBRDF. Our evaluation shows that this learned gradient-descent method achieves state-of-the-art performance for SVBRDF estimation on synthetic and real images.

Item: Texture Browser: Feature-based Texture Exploration (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Luo, Xuejiao; Scandolo, Leonardo; Eisemann, Elmar
Editors: Borgo, Rita; Marai, G. Elisabeta; Landesberger, Tatiana von
Abstract: Texture is a key characteristic in the definition of the physical appearance of an object and a crucial element in the creation process of 3D artists. However, retrieving a texture that matches an intended look from an image collection is difficult. Contrary to most photo collections, for which object recognition has proven quite useful, syntactic descriptions of texture characteristics are not straightforward, and even creating appropriate metadata is a very difficult task. In this paper, we propose a system to help explore large unlabeled collections of texture images. The key insight is that spatially grouping textures sharing similar features can simplify navigation. Our system uses a pre-trained convolutional neural network to extract high-level semantic image features, which are then mapped to a 2-dimensional location using an adaptation of t-SNE, a dimensionality-reduction technique. We describe an interface to visualize and explore the resulting distribution and provide a series of enhanced navigation tools (prioritized t-SNE, scalable clustering, and multi-resolution embedding) to further facilitate exploration and retrieval tasks. Finally, we present the results of a user evaluation that demonstrates the effectiveness of our solution.

Item: Quad-Based Fourier Transform for Efficient Diffraction Synthesis (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Scandolo, Leonardo; Lee, Sungkil; Eisemann, Elmar
Editors: Jakob, Wenzel; Hachisuka, Toshiya
Abstract: Far-field diffraction can be evaluated using the Discrete Fourier Transform (DFT) in image space, but this is costly due to its dense sampling. We propose a technique based on a closed-form solution of the continuous Fourier transform for simple vector primitives (quads) and introduce a hierarchical and progressive evaluation to achieve real-time performance. Our method is able to simulate diffraction effects in optical systems and can handle varying visibility due to dynamic light sources. Furthermore, it seamlessly extends to near-field diffraction. We show the benefit of our solution in various applications, including realistic real-time glare and bloom rendering.
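
To illustrate the closed-form building block behind the quad-based entry above, the sketch below evaluates the continuous Fourier transform of a single axis-aligned rectangular quad analytically: a product of sinc terms with a phase factor for the quad's centre. This is a textbook special case for illustration only; the function name, rectangle parameters, and frequency grid are assumptions, and the paper's hierarchical and progressive evaluation is not reproduced.

```python
import numpy as np

# Continuous 2D Fourier transform of an axis-aligned rectangle of size (a, b)
# centred at (x0, y0), evaluated at frequencies (u, v). np.sinc(x) is sin(pi*x)/(pi*x).
def quad_fourier_transform(u, v, a, b, x0=0.0, y0=0.0):
    amplitude = a * b * np.sinc(a * u) * np.sinc(b * v)
    phase = np.exp(-2j * np.pi * (u * x0 + v * y0))   # shift theorem for the quad centre
    return amplitude * phase

# Far-field diffraction intensity on a frequency grid is proportional to |F|^2.
u, v = np.meshgrid(np.linspace(-5, 5, 256), np.linspace(-5, 5, 256))
intensity = np.abs(quad_fourier_transform(u, v, a=1.0, b=0.5)) ** 2
```

Because each quad has such a closed-form transform, a scene's diffraction pattern can be accumulated per primitive instead of densely sampling a DFT in image space.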

Item: Interactive Depixelization of Pixel Art through Spring Simulation (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Matusovic, Marko; Parakkat, Amal Dev; Eisemann, Elmar
Editors: Myszkowski, Karol; Niessner, Matthias
Abstract: We introduce an approach for converting pixel art into high-quality vector images. While much progress has been made on automatic conversion, there is an inherent ambiguity in pixel art, which can lead to a mismatch with the artist's original intent. Further, there is room for incorporating aesthetic preferences during the conversion. Consequently, this work introduces an interactive framework that enables users to guide the conversion process towards high-quality vector illustrations. A key idea of the method is to cast the conversion process into a spring-system optimization that can be influenced by the user. Hereby, it is possible to resolve various ambiguities that cannot be handled by an automatic algorithm.

Item: Spectral Gradient Sampling for Path Tracing (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Petitjean, Victor; Bauszat, Pablo; Eisemann, Elmar
Editors: Jakob, Wenzel; Hachisuka, Toshiya
Abstract: Spectral Monte-Carlo methods are currently the most powerful techniques for simulating light transport with wavelength-dependent phenomena (e.g., dispersion, colored particle scattering, or diffraction gratings). Compared to trichromatic rendering, sampling the spectral domain requires significantly more samples for noise-free images. Inspired by gradient-domain rendering, which estimates image gradients, we propose spectral gradient sampling to estimate the gradients of the spectral distribution inside a pixel. These gradients can be sampled with a significantly lower variance by carefully correlating the path samples of a pixel in the spectral domain, and we introduce a mapping function that shifts paths with wavelength-dependent interactions. We compute the result of each pixel by integrating the estimated gradients over the spectral domain using a one-dimensional screened Poisson reconstruction. Our method improves convergence and reduces chromatic noise from spectral sampling, as demonstrated by our implementation within a conventional path tracer.

Item: MegaViews: Scalable Many-View Rendering With Concurrent Scene-View Hierarchy Traversal (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Kol, Timothy R.; Bauszat, Pablo; Lee, Sungkil; Eisemann, Elmar
Editors: Chen, Min; Benes, Bedrich
Abstract: We present a scalable solution to render complex scenes from a large number of viewpoints. While previous approaches rely either on a scene or a view hierarchy to process multiple elements together, we make full use of both, enabling sublinear performance in terms of views and scene complexity. By concurrently traversing the hierarchies, we efficiently find shared information among views to amortize rendering costs. One example application is many-light global illumination. Our solution accelerates shadow-map generation for virtual point lights, whose number can now be raised to over a million while maintaining interactive rates.

Item: A Multi-pass Method for Accelerated Spectral Sampling (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Ruit, Mark van de; Eisemann, Elmar
Editors: Zhang, Fang-Lue; Eisemann, Elmar; Singh, Karan
Abstract: Spectral Monte Carlo rendering can simulate advanced light phenomena, such as chromatic dispersion, but typically shows slow convergence behavior. Properly sampling the spectral domain can be challenging in scenes with many complex spectral distributions. To this end, we propose a multi-pass approach. We build and store coarse screen-space estimates of incident spectral radiance and use these to importance sample the spectral domain. Hereby, we lower variance and reduce noise with little overhead. Our method handles challenging scenarios with difficult spectral distributions, many different emitters, and participating media. Finally, it can be integrated into existing spectral rendering methods for additional acceleration.
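
A minimal sketch of the kind of spectral importance sampling that a coarse screen-space radiance estimate enables, as in the multi-pass entry above: a per-pixel histogram over wavelength bins is turned into a CDF and inverted to draw a wavelength together with its pdf for Monte Carlo weighting. The bin layout, clamping constant, and the 380-730 nm range are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch: draw a wavelength proportionally to a coarse per-pixel estimate
# of incident spectral radiance (one value per wavelength bin).
def sample_wavelength(radiance_bins, lambda_min=380.0, lambda_max=730.0, rng=np.random):
    w = np.maximum(np.asarray(radiance_bins, dtype=float), 1e-6)  # avoid zero-probability bins
    p_bin = w / w.sum()                          # discrete probability of each bin
    cdf = np.cumsum(p_bin)
    i = min(int(np.searchsorted(cdf, rng.random())), len(w) - 1)  # inverse-transform sample a bin
    bin_width = (lambda_max - lambda_min) / len(w)
    lam = lambda_min + (i + rng.random()) * bin_width             # jitter uniformly within the bin
    pdf = p_bin[i] / bin_width                   # density per nanometre, for MC weighting
    return lam, pdf
```

Dividing each sample's contribution by the returned pdf keeps the estimator consistent while concentrating wavelength samples where the coarse estimate indicates the most energy.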

Item: ShutterApp: Spatio-temporal Exposure Control for Videos (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Salamon, Nestor; Billeter, Markus; Eisemann, Elmar
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Abstract: A camera's shutter controls the incoming light that reaches the camera sensor. Different shutters lead to wildly different results and are often used as an artistic tool in movies, e.g., to indirectly control the effect of motion blur. However, a physical camera is limited to a single shutter setting at any given moment. ShutterApp enables users to define spatio-temporally varying virtual shutters that go beyond the options available in real-world camera systems. A user provides a sparse set of annotations that define shutter functions at selected locations in key frames. From this input, our solution defines shutter functions for each pixel of the video sequence using a suitable interpolation technique; these functions are then employed to derive the output video. Our solution performs in real time on commodity hardware. Hereby, users can explore different options interactively, leading to a new level of expressiveness without having to rely on specialized hardware or laborious editing.

Item: Editing Compressed High-resolution Voxel Scenes with Attributes (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Molenaar, Mathijs; Eisemann, Elmar
Editors: Myszkowski, Karol; Niessner, Matthias
Abstract: Sparse Voxel Directed Acyclic Graphs (SVDAGs) are an efficient solution for storing high-resolution voxel geometry. Recently, algorithms for the interactive modification of SVDAGs have been proposed that maintain the compressed geometric representation. Nevertheless, voxel attributes, such as colours, require uncompressed storage, which can result in high memory usage over the course of the application. The reason is the high cost of existing attribute-compression schemes, which remain unfit for interactive applications. In this paper, we introduce two attribute-compression methods (lossless and lossy), which enable the interactive editing of compressed high-resolution voxel scenes including attributes.
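
For context on what voxel-attribute compression involves, the sketch below shows one classic lossless building block: unique colour values go into a palette and each voxel stores only a small index. This generic scheme is chosen purely for illustration and is not one of the two methods introduced in the entry above.

```python
# Generic lossless palette compression for per-voxel attributes (illustration only,
# not the paper's method). Attributes must be hashable, e.g. (r, g, b) tuples.
def palette_compress(attributes):
    palette = list(dict.fromkeys(attributes))              # unique values, first-seen order
    index = {value: i for i, value in enumerate(palette)}
    bits_per_voxel = max(1, (len(palette) - 1).bit_length())
    return palette, [index[value] for value in attributes], bits_per_voxel

def palette_decompress(palette, indices):
    return [palette[i] for i in indices]

# Example: four voxels sharing two colours compress to two palette entries and 1-bit indices.
voxels = [(255, 0, 0), (0, 0, 255), (255, 0, 0), (0, 0, 255)]
palette, indices, bits = palette_compress(voxels)
assert palette_decompress(palette, indices) == voxels
```

Real SVDAG attribute schemes are considerably more involved, but the example conveys the tension the abstract describes: compact attribute storage versus updates cheap enough for interactive editing.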