Search Results

Now showing 1 - 10 of 63
  • Item
    Geometry and Attribute Compression for Voxel Scenes
    (The Eurographics Association and John Wiley & Sons Ltd., 2016) Dado, Bas; Kol, Timothy R.; Bauszat, Pablo; Thiery, Jean-Marc; Eisemann, Elmar; Joaquim Jorge and Ming Lin
    Voxel-based approaches are today's standard to encode volume data. Recently, directed acyclic graphs (DAGs) were successfully used for compressing sparse voxel scenes as well, but they are restricted to a single bit of (geometry) information per voxel. We present a method to compress arbitrary data, such as colors, normals, or reflectance information. By decoupling geometry and voxel data via a novel mapping scheme, we are able to apply the DAG principle to encode the topology, while using a palette-based compression for the voxel attributes, leading to a drastic memory reduction. Our method outperforms existing state-of-the-art techniques and is well-suited for GPU architectures. We achieve real-time performance on commodity hardware for colored scenes with up to 17 hierarchical levels (a 128K³ voxel resolution), which are stored fully in core.
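The palette idea described above can be illustrated with a minimal sketch (a hypothetical simplification, not the paper's actual data layout): deduplicate per-voxel attributes into a palette and store only compact indices, so the index width grows with the logarithm of the number of distinct attribute values.

```python
import math

def palette_compress(colors):
    """Map a list of attribute tuples (e.g. RGB colors) to a palette,
    per-voxel palette indices, and the bit width needed per index."""
    palette = sorted(set(colors))
    index_of = {c: i for i, c in enumerate(palette)}
    indices = [index_of[c] for c in colors]
    bits = max(1, math.ceil(math.log2(len(palette))))
    return palette, indices, bits

def palette_decompress(palette, indices):
    """Recover the original per-voxel attributes from palette + indices."""
    return [palette[i] for i in indices]
```

For scenes whose voxels share many attribute values, storing `bits` bits per voxel plus one palette is far smaller than storing full attributes per voxel.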
  • Item
    Recent Advances in Adaptive Sampling and Reconstruction for Monte Carlo Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2015) Zwicker, Matthias; Jarosz, Wojciech; Lehtinen, Jaakko; Moon, Bochang; Ramamoorthi, Ravi; Rousselle, Fabrice; Sen, Pradeep; Soler, Cyril; Yoon, Sungeui E.; K. Hormann and O. Staadt
    Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.
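The "a posteriori" strategy the survey describes — estimate the error of several reconstruction filters and pick the best one per pixel — can be sketched as follows (an illustrative toy, with made-up inputs rather than any surveyed method's actual error estimator):

```python
def select_best_filter(filtered, errors):
    """filtered[k][p]: output of candidate filter k at pixel p.
    errors[k][p]:   its estimated per-pixel error (e.g. estimated MSE).
    Returns a reconstruction that, at each pixel, takes the candidate
    filter with the smallest estimated error."""
    n = len(filtered[0])
    out = []
    for p in range(n):
        k_best = min(range(len(filtered)), key=lambda k: errors[k][p])
        out.append(filtered[k_best][p])
    return out
```

In real systems the same per-pixel error estimates also drive where additional samples are placed.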
  • Item
    Self Tuning Texture Optimization
    (The Eurographics Association and John Wiley & Sons Ltd., 2015) Kaspar, Alexandre; Neubert, Boris; Lischinski, Dani; Pauly, Mark; Kopf, Johannes; Olga Sorkine-Hornung and Michael Wimmer
    The goal of example-based texture synthesis methods is to generate arbitrarily large textures from limited exemplars in order to fit the exact dimensions and resolution required for a specific modeling task. The challenge is to faithfully capture all of the visual characteristics of the exemplar texture, without introducing obvious repetitions or unnatural-looking visual elements. While existing non-parametric synthesis methods have made remarkable progress towards this goal, most such methods have been demonstrated only on relatively low-resolution exemplars. Real-world high-resolution textures often contain texture details at multiple scales, which these methods have difficulty reproducing faithfully. In this work, we present a new general-purpose and fully automatic self-tuning non-parametric texture synthesis method that extends Texture Optimization by introducing several key improvements that result in superior synthesis ability. Our method is able to self-tune its various parameters and weights and focuses on addressing three challenging aspects of texture synthesis: (i) irregular large-scale structures are faithfully reproduced through the use of automatically generated and weighted guidance channels; (ii) repetition and smoothing of texture patches is avoided by new spatial uniformity constraints; (iii) a smart initialization strategy is used in order to improve the synthesis of regular and near-regular textures, without affecting textures that do not exhibit regularities. We demonstrate the versatility and robustness of our completely automatic approach on a variety of challenging high-resolution texture exemplars.
  • Item
    Real-time Content Adaptive Depth Retargeting for Light Field Displays
    (The Eurographics Association, 2015) Adhikarla, Vamsi Kiran; Marton, Fabio; Barsi, Attila; Kovács, Péter Tamás; Balogh, Tibor; Gobbetti, Enrico; B. Solenthaler and E. Puppo
    Light field display systems present visual scenes using a set of directional light beams emitted from multiple light sources as if they were emitted from points in a physical scene. These displays offer better angular resolution and therefore provide more depth of field than other automultiscopic displays. However, in some cases the size of a scene may still exceed the available depth range of a light field display. Thus, rendering on these displays requires suitable adaptation of 3D content to provide a comfortable viewing experience. We propose a content-adaptive depth retargeting method to automatically modify the scene depth to suit the needs of a light field display. By analyzing the scene and using display-specific parameters, we formulate and solve an optimization problem to non-linearly adapt the scene depth to the display depth. Our method synthesizes the depth-retargeted light field content in real time to support interactive visualization, and it preserves the 3D appearance of the displayed objects as much as possible.
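The core operation — nonlinearly squeezing scene depth into the display's comfortable depth budget — can be illustrated with a simple parametric remap (a stand-in for the paper's content-driven optimization; the `gamma` exponent here is an illustrative choice, not a display parameter from the paper):

```python
def retarget_depth(z, z_min, z_max, d_min, d_max, gamma=0.5):
    """Nonlinearly remap a scene depth z in [z_min, z_max] into a display's
    comfortable depth range [d_min, d_max]. With gamma < 1, far depths are
    compressed more strongly than near ones, mimicking the kind of
    non-uniform mapping a content-adaptive method would compute."""
    t = (z - z_min) / (z_max - z_min)   # normalize scene depth to [0, 1]
    return d_min + (d_max - d_min) * t ** gamma
```

The published method instead solves for this mapping per frame from a scene analysis, so salient depth intervals keep more of the budget.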
  • Item
    2D Points Curve Reconstruction Survey and Benchmark
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Ohrhallinger, Stefan; Peethambaran, Jiju; Parakkat, Amal Dev; Dey, Tamal Krishna; Muthuganapathy, Ramanathan; Bühler, Katja and Rushmeier, Holly
    Curve reconstruction from unstructured points in a plane is a fundamental problem with many applications that has generated research interest for decades. Involved aspects like handling open, sharp, multiple and non-manifold outlines, run-time and provability, as well as potential extension to 3D for surface reconstruction, have led to many different algorithms. We survey the literature on 2D curve reconstruction and then present an open-source benchmark for experimental study. Our unprecedented evaluation of a selected set of planar curve reconstruction algorithms aims to give an overview of both quantitative analysis and qualitative aspects, helping users select the right algorithm for specific problems in the field. Our benchmark framework is available online to permit reproducing the results and easy integration of new algorithms.
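As a point of reference for what such benchmarks evaluate, here is the simplest conceivable baseline (a naive greedy chain, not one of the surveyed algorithms): connect each point to its nearest unvisited neighbor. It only works for densely, uniformly sampled simple curves — exactly the limitations the surveyed methods try to overcome.

```python
import math

def greedy_chain(points):
    """Naive curve reconstruction baseline: start at the first point and
    repeatedly connect to the nearest unvisited point. Fails on sparse
    sampling, sharp corners, and multiple or non-manifold outlines."""
    rest = list(points[1:])
    chain = [points[0]]
    while rest:
        last = chain[-1]
        nxt = min(rest, key=lambda p: math.dist(last, p))
        rest.remove(nxt)
        chain.append(nxt)
    return chain
```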
  • Item
    Inertial Steady 2D Vector Field Topology
    (The Eurographics Association and John Wiley & Sons Ltd., 2016) Günther, Tobias; Theisel, Holger; Joaquim Jorge and Ming Lin
    Vector field topology is a powerful and mature tool for the study of the asymptotic behavior of tracer particles in steady flows. Yet, it does not capture the behavior of finite-sized particles, because they develop inertia and do not move tangentially to the flow. In this paper, we use the fact that the trajectories of inertial particles can be described as tangent curves of a higher-dimensional vector field. Using this, we conduct a full classification of the first-order critical points of this higher-dimensional flow, and devise a method for their efficient extraction. Further, we interactively visualize the asymptotic behavior of finite-sized particles by a glyph visualization that encodes the outcome of any initial condition of the governing ODE, i.e., for a varying initial position and/or initial velocity. With this, we present a first approach to extend traditional vector field topology to the inertial case.
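The "higher-dimensional vector field" idea can be sketched concretely: augment position with velocity, so inertial trajectories become tangent curves of a flow over (x, y, vx, vy). The sketch below uses a common simplified inertial model (velocity relaxes toward the flow with response time r; gravity omitted), which may differ in detail from the paper's governing ODE.

```python
def inertial_flow(u, state, r):
    """Right-hand side of the augmented (position, velocity) system:
    x' = v,  v' = (u(x) - v) / r,  where u is the 2D steady flow field
    and r is the particle response time. Tangent curves of this 4D field
    are inertial-particle trajectories."""
    x, y, vx, vy = state
    ux, uy = u(x, y)
    return (vx, vy, (ux - vx) / r, (uy - vy) / r)

def euler_step(u, state, r, dt):
    """One explicit Euler step of the augmented system (for illustration;
    production tracers would use a higher-order integrator)."""
    d = inertial_flow(u, state, r)
    return tuple(s + dt * ds for s, ds in zip(state, d))
```

A particle released at rest in a constant flow gradually relaxes onto the flow velocity — the kind of asymptotic behavior the paper's glyphs encode per initial condition.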
  • Item
    Latency Considerations of Depth-first GPU Ray Tracing
    (The Eurographics Association, 2014) Guthe, Michael; Eric Galin and Michael Wand
    Despite the potential divergence of depth-first ray tracing [AL09], it is nevertheless the most efficient approach on massively parallel graphics processors. Due to the use of specialized caching strategies that were originally developed for texture access, it has been shown to be compute- rather than bandwidth-limited. With recent developments, however, not only the raw bandwidth, but also the latency of both memory accesses and read-after-write register dependencies can become a limiting factor. In this paper we analyze the memory and instruction dependency latencies of depth-first ray tracing. We show that ray tracing is in fact latency-limited on current GPUs and propose three simple strategies to better hide the latencies. This way, we come significantly closer to the maximum performance of the GPU.
  • Item
    Perception-driven Accelerated Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Weier, Martin; Stengel, Michael; Roth, Thorsten; Didyk, Piotr; Eisemann, Elmar; Eisemann, Martin; Grogorick, Steve; Hinkenjann, André; Kruijff, Ernst; Magnor, Marcus; Myszkowski, Karol; Slusallek, Philipp; Victor Ostromoukov and Matthias Zwicker
    Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, spectral effects, etc., especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or light field displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
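One of the perceptual limitations such methods exploit is the sharp falloff of visual acuity with eccentricity. A hypothetical sketch (parameter values are illustrative, not taken from the report) of a foveated sampling-density schedule:

```python
def sampling_density(ecc_deg, e0=2.3, d_max=1.0):
    """Illustrative foveated sampling density: full density d_max in the
    fovea, falling off with eccentricity (in degrees) following a simple
    hyperbolic acuity model d(e) = d_max * e0 / (e0 + e). Renderers can
    use such a schedule to spend samples only where the eye can resolve
    detail."""
    return d_max * e0 / (e0 + ecc_deg)
```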
  • Item
    Generalized Diffusion Curves: An Improved Vector Representation for Smooth-Shaded Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2016) Jeschke, Stefan; Joaquim Jorge and Ming Lin
    This paper generalizes the well-known Diffusion Curve Images (DCI), which are composed of a set of Bézier curves with colors specified on either side. These colors are diffused as Laplace functions over the image domain, which results in smooth color gradients interrupted by the Bézier curves. Our new formulation allows for more color control away from the boundary, providing a similar expressive power as recent Bilaplace image models without introducing associated issues and computational costs. The new model is based on a special Laplace function blending and a new edge blur formulation. We demonstrate that given some user-defined boundary curves over an input raster image, fitting colors and edge blur from the image to the new model and subsequent editing and animation is equally convenient as with DCIs. Numerous examples and comparisons to DCIs are presented.
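The diffusion step underlying Diffusion Curve Images can be sketched on a tiny grid (a toy Jacobi relaxation of the Laplace equation, not the paper's solver or its generalized blending):

```python
def diffuse(grid, fixed, iters=500):
    """Jacobi iteration for the Laplace equation on a small 2D grid of
    scalar values: interior cells relax to the average of their four
    neighbours, while cells listed in `fixed` (the curve colors) stay
    pinned; border cells are treated as boundary data. The result is a
    smooth gradient interrupted only at the constrained cells."""
    h, w = len(grid), len(grid[0])
    for _ in range(iters):
        nxt = [row[:] for row in grid]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if (y, x) in fixed:
                    continue
                nxt[y][x] = 0.25 * (grid[y - 1][x] + grid[y + 1][x]
                                    + grid[y][x - 1] + grid[y][x + 1])
        grid = nxt
    return grid
```

Running this per color channel with colors pinned along the curves reproduces the characteristic smooth DCI gradients at toy scale.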
  • Item
    Regularizing Image Reconstruction for Gradient-Domain Rendering with Feature Patches
    (The Eurographics Association and John Wiley & Sons Ltd., 2016) Manzi, Marco; Vicini, Delio; Zwicker, Matthias; Joaquim Jorge and Ming Lin
    We present a novel algorithm to reconstruct high-quality images from sampled pixels and gradients in gradient-domain rendering. Our approach extends screened Poisson reconstruction by adding additional regularization constraints. Our key idea is to exploit local patches in feature images, which contain per-pixel normals, textures, positions, etc., to formulate these constraints. We describe a GPU implementation of our approach that runs on the order of seconds on megapixel images. We demonstrate a significant improvement in image quality over screened Poisson reconstruction under the L1 norm. Because we adapt the regularization constraints to the noise level in the input, our algorithm is consistent and converges to the ground truth.
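The screened Poisson baseline this paper extends can be sketched in 1D (a toy gradient-descent version under the L2 norm, without the paper's feature-patch regularizers): reconcile noisy pixel values with sampled finite-difference gradients by minimizing a weighted sum of the two residuals.

```python
def screened_poisson_1d(pixels, grads, alpha=0.1, iters=2000, lr=0.1):
    """Reconstruct a 1D signal u from noisy pixel samples `pixels` and
    sampled gradients `grads` (grads[i] ~ u[i+1] - u[i]) by gradient
    descent on the screened Poisson energy
        alpha * sum_i (u[i] - pixels[i])**2
              + sum_i ((u[i+1] - u[i]) - grads[i])**2 .
    Small alpha trusts the (usually less noisy) gradients more."""
    u = list(pixels)
    n = len(u)
    for _ in range(iters):
        d = [2 * alpha * (u[i] - pixels[i]) for i in range(n)]  # data term
        for i in range(n - 1):                                  # gradient term
            r = (u[i + 1] - u[i]) - grads[i]
            d[i] -= 2 * r
            d[i + 1] += 2 * r
        u = [u[i] - lr * d[i] for i in range(n)]
    return u
```

An outlier pixel sample is pulled back toward the ramp implied by the gradients — the behavior the paper's added regularization constraints further strengthen in flat feature regions.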