Search Results
Now showing 1 - 9 of 9
Item Low-Cost Subpixel Rendering for Diverse Displays (The Eurographics Association and John Wiley and Sons Ltd., 2014) Engelhardt, Thomas; Schmidt, Thorsten-Walther; Kautz, Jan; Dachsbacher, Carsten; Holly Rushmeier and Oliver Deussen
Subpixel rendering increases the apparent display resolution by taking into account the subpixel structure of a given display. In essence, each subpixel is addressed individually, allowing the underlying signal to be sampled more densely. Unfortunately, naïve subpixel sampling introduces colour aliasing, as each subpixel only displays a specific colour (usually R, G and B subpixels are used). As previous work has shown, chromatic aliasing can be reduced significantly by taking the sensitivity of the human visual system into account. In this work, we find optimal filters for subpixel rendering for a diverse set of 1D and 2D subpixel layout patterns. We demonstrate that these optimal filters can be approximated well with analytical functions. We incorporate our filters into GPU-based multi-sample anti-aliasing to yield subpixel rendering at a very low cost (1–2 ms filtering time at HD resolution). We also show that texture filtering can be adapted to perform efficient subpixel rendering. Finally, we analyse the findings of a user study we performed, which underpins the increased visual fidelity that can be achieved for diverse display layouts by using our optimal filters.

Item Interactive Appearance Editing in RGB-D Images (The Eurographics Association, 2014) Bergmann, Stephan; Ritschel, Tobias; Dachsbacher, Carsten; Jan Bender and Arjan Kuijper and Tatiana von Landesberger and Holger Theisel and Philipp Urban
The availability of increasingly powerful and affordable image and depth sensors, in conjunction with the necessary processing power, creates novel possibilities for more sophisticated and powerful image editing tools. Along these lines we present a method to alter the appearance of objects in RGB-D images by re-shading their surfaces with arbitrary BRDF models and subsurface scattering using the dipole diffusion approximation. To evaluate the incident light for re-shading we combine ray marching, using the depth buffer as approximate geometry, with environment lighting. The environment map is built from information solely contained in the RGB-D input image, exploiting both the reflections on glossy surfaces and geometric information. Our CPU/GPU implementation provides interactive feedback to facilitate intuitive editing. We compare and demonstrate our method with rendered images and digital photographs.
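The entry above describes the re-shading pipeline only at a high level. As a rough illustration of the underlying idea (not the authors' implementation), the sketch below reconstructs a camera-space normal from a depth buffer and re-shades a single pixel with a Lambertian BRDF under sampled environment lighting; the function names, the simple BRDF and the uniform hemisphere sampling are assumptions made for this example.

```python
import numpy as np

def normal_from_depth(depth, x, y, fx, fy):
    """Estimate a camera-space normal at pixel (x, y) from central depth differences.
    depth: 2D array of camera-space depths; fx, fy: focal lengths in pixels."""
    dzdx = (depth[y, x + 1] - depth[y, x - 1]) * 0.5
    dzdy = (depth[y + 1, x] - depth[y - 1, x]) * 0.5
    n = np.array([-dzdx * fx, -dzdy * fy, 1.0])   # gradient of the depth surface
    return n / np.linalg.norm(n)

def reshade_lambertian(albedo, normal, env_dirs, env_radiance):
    """Re-shade one pixel with a Lambertian BRDF under sampled environment lighting.
    env_dirs: (N, 3) unit directions, assumed to sample the upper hemisphere uniformly;
    env_radiance: (N, 3) RGB radiance per direction."""
    cos_theta = np.clip(env_dirs @ normal, 0.0, None)                 # clamped N.L
    irradiance = (env_radiance * cos_theta[:, None]).mean(0) * 2.0 * np.pi
    return albedo / np.pi * irradiance                                # diffuse BRDF = albedo / pi
```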
Item Efficient Monte Carlo Rendering with Realistic Lenses (The Eurographics Association and John Wiley and Sons Ltd., 2014) Hanika, Johannes; Dachsbacher, Carsten; B. Levy and J. Kautz
In this paper we present a novel approach to simulate image formation for a wide range of real-world lenses in the Monte Carlo ray tracing framework. Our approach sidesteps the overhead of tracing rays through a system of lenses and requires no tabulation. To this end we first improve the precision of polynomial optics to closely match ground-truth ray tracing. Second, we show how the Jacobian of the optical system enables efficient importance sampling, which is crucial for difficult paths such as sampling the aperture, which is hidden behind lenses on both sides. Our results show that this yields converged images significantly faster than previous methods and accurately renders complex lens systems with negligible overhead compared to simple models, e.g. the thin lens model. We demonstrate the practicality of our method by incorporating it into a bidirectional path tracing framework and show how it can provide information needed for sophisticated light transport algorithms.

Item State of the Art in Artistic Editing of Appearance, Lighting, and Material (The Eurographics Association, 2014) Schmidt, Thorsten-Walther; Pellacini, Fabio; Nowrouzezahrai, Derek; Jarosz, Wojciech; Dachsbacher, Carsten; Sylvain Lefebvre and Michela Spagnuolo
Mimicking the appearance of the real world is a longstanding goal of computer graphics, with several important applications in the feature-film, architecture and medical industries. Images with well-designed shading are an important tool for conveying information about the world, be it the shape and function of a CAD model, or the mood of a movie sequence. However, authoring this content is often a tedious task, even if undertaken by groups of highly trained and experienced artists. Unsurprisingly, numerous methods to facilitate and accelerate this appearance editing task have been proposed, enabling the editing of scene objects' appearances, lighting, and materials, as well as introducing new interaction paradigms and specialized preview rendering techniques. In this STAR we provide a comprehensive survey of artistic appearance, lighting, and material editing approaches. We organize this complex and active research area in a structure tailored to academic researchers, graduate students, and industry professionals alike. In addition to editing approaches, we discuss how user interaction paradigms and rendering backends combine to form usable systems for appearance editing. We conclude with a discussion of open problems and challenges to motivate and guide future research.

Item Dual-Color Mixing for Fused Deposition Modeling Printers (The Eurographics Association and John Wiley and Sons Ltd., 2014) Reiner, Tim; Carr, Nathan; Mech, Radomir; Stava, Ondrej; Dachsbacher, Carsten; Miller, Gavin; B. Levy and J. Kautz
In this work we detail a method that leverages the two color heads of recent low-end fused deposition modeling (FDM) 3D printers to produce continuous-tone imagery. The challenge behind producing such two-tone imagery is how to finely interleave the two colors while minimizing the switching between print heads, making each printed color span as long and continuous as possible to avoid artifacts associated with printing short segments. The key insight behind our work is that by applying small geometric offsets, tone can be varied without the need to switch color print heads within a single layer. We can now effectively print (two-tone) texture-mapped models, capturing both geometric and color information in our output 3D prints.
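The core idea of the dual-color method above is that tone can be encoded as a geometric offset between the two colored shells of a wall, so the print heads need not switch within a span. The sketch below is a loose illustration of that idea under simplifying assumptions (a fixed wall width split proportionally between the two filaments); it is not the paper's actual slicing algorithm, and all names and parameters are hypothetical.

```python
def two_tone_offsets(tones, wall_width=0.8):
    """Map per-segment target tones in [0, 1] to widths of the two colored
    shells of a fixed-width wall (illustrative assumption only).

    A tone of 0 gives the whole wall to color A, 1 gives it to color B;
    intermediate tones shift the boundary between the shells so the visible
    mix approximates the target without switching heads mid-span.
    """
    offsets = []
    for t in tones:
        t = min(max(t, 0.0), 1.0)          # clamp to the printable range
        width_a = (1.0 - t) * wall_width   # extrusion width assigned to color A
        width_b = t * wall_width           # extrusion width assigned to color B
        offsets.append((width_a, width_b))
    return offsets

# Example: a smooth gradient across five consecutive wall segments.
print(two_tone_offsets([0.0, 0.25, 0.5, 0.75, 1.0]))
```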
Item Fractional Reyes-Style Adaptive Tessellation for Continuous Level of Detail (The Eurographics Association and John Wiley and Sons Ltd., 2014) Liktor, Gabor; Pan, Minghao; Dachsbacher, Carsten; J. Keyser, Y. J. Kim, and P. Wonka
In this paper we present a fractional parametric splitting scheme for Reyes-style adaptive tessellation. Our parallel algorithm generates crack-free tessellation from a parametric surface, which is also free of sudden temporal changes under animation. Continuous level of detail is not addressed by existing Reyes-style methods, since these aim to produce subpixel-sized micropolygons, where topology changes are no longer noticeable. Using our method, rendering pipelines that use larger triangles, and are thus sensitive to geometric popping, may also benefit from the quality of the split-dice tessellation stages of Reyes. We demonstrate results with a real-time GPU implementation, going beyond the limited quality and resolution of the hardware tessellation unit. In contrast to previous split-dice methods, our split stage is compatible with the fractional hardware tessellation scheme that has been designed for continuous level of detail.

Item Scalable Realistic Rendering with Many-Light Methods (The Eurographics Association and John Wiley and Sons Ltd., 2014) Dachsbacher, Carsten; Křivánek, Jaroslav; Hašan, Miloš; Arbree, Adam; Walter, Bruce; Novák, Jan; Holly Rushmeier and Oliver Deussen
Recent years have seen increasing attention and significant progress in many-light rendering, a class of methods for efficient computation of global illumination. The many-light formulation offers a unified mathematical framework for the problem, reducing the full light transport simulation to the calculation of the direct illumination from many virtual light sources. These methods are unrivaled in their scalability: they are able to produce plausible images in a fraction of a second, but also converge to the full solution over time. In this state-of-the-art report, we give an easy-to-follow, introductory tutorial of the many-light theory; provide a comprehensive, unified survey of the topic with a comparison of the main algorithms; discuss limitations regarding materials and light transport phenomena; and present a vision to motivate and guide future research. We cover both the fundamental concepts as well as improvements, extensions and applications of many-light rendering.
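The many-light formulation summarized above reduces global illumination to direct lighting from a set of virtual point lights (VPLs). The following sketch shows that reduction in its simplest diffuse-only form, with the customary distance clamping to suppress the 1/r^2 singularity; visibility tests and the generation of the VPLs themselves are omitted, and the function signature is an assumption for illustration.

```python
import numpy as np

def gather_vpls(x, n, albedo, vpl_pos, vpl_normal, vpl_flux, clamp_dist=0.1):
    """Estimate outgoing diffuse radiance at surface point x (normal n) by summing
    the direct contribution of virtual point lights, the core many-light idea.

    vpl_pos, vpl_normal: (N, 3) arrays; vpl_flux: (N, 3) RGB flux per VPL.
    Visibility is assumed (no shadow rays); squared distances are clamped to
    avoid the near-singularity of the 1/r^2 term, as is common in practice.
    """
    d = vpl_pos - x
    r = np.sqrt(np.sum(d * d, axis=1))
    r2 = np.maximum(r * r, clamp_dist ** 2)
    w = d / r[:, None]                                           # unit directions to the VPLs
    cos_x = np.clip(w @ n, 0.0, None)                            # cosine at the receiver
    cos_v = np.clip(-np.sum(w * vpl_normal, axis=1), 0.0, None)  # cosine at the VPL
    geom = cos_x * cos_v / r2                                    # clamped geometry term
    return (albedo / np.pi) * (vpl_flux * geom[:, None]).sum(axis=0)
```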
Item Clustered Pre-convolved Radiance Caching (The Eurographics Association, 2014) Rehfeld, Hauke; Zirr, Tobias; Dachsbacher, Carsten; Margarita Amor and Markus Hadwiger
We present a scalable method for rendering indirect illumination in diffuse and glossy scenes. Our method builds on pre-convolved radiance caching (RC), which enables reusing the incident radiance computed at a surface point for its neighborhood. Our contributions include the efficient and robust generation of these RCs based on a pre-filtered voxel representation that stores scene geometry and surface illumination. In addition, we describe a distribution strategy that places the RCs according to screen-space clusters to ensure all pixels have valid radiance data when evaluating indirect illumination. The results demonstrate the scalability of our method and analyze the relation between render quality, surface glossiness and computation time, which depends on the number of caches and their resolution.

Item Precomputing Sound Scattering for Structured Surfaces (The Eurographics Association, 2014) Mückl, Gregor; Dachsbacher, Carsten; Margarita Amor and Markus Hadwiger
Room acoustic simulations commonly use simple models for sound scattering on surfaces in the scene. However, the continuing increase of available parallel computing power makes it possible to apply more sophisticated models. We present a method to precompute the distribution of sound reflected off a structured surface, described by a height map and normal map, using the Kirchhoff approximation. Our precomputation and interpolation scheme, based on representing the reflected pressure with von Mises-Fisher functions, is able to retain many directional and spectral features of the reflected pressure while keeping the computational and storage requirements low. We discuss our model and demonstrate applications of our precomputed functions in acoustic ray tracing and a novel interactive method suitable for applications such as architectural walk-throughs and video games.
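The reflected pressure in the entry above is represented with von Mises-Fisher (vMF) lobes on the sphere. As a small, self-contained illustration of that representation (not the paper's precomputation code), the sketch below evaluates a vMF mixture for a query direction; the function names and lobe parameters are made up for the example.

```python
import numpy as np

def vmf_pdf(d, mu, kappa):
    """Density of a von Mises-Fisher lobe on the unit sphere for unit direction d,
    mean direction mu and concentration kappa > 0, written in the numerically
    stable form kappa * exp(kappa*(mu.d - 1)) / (2*pi*(1 - exp(-2*kappa)))."""
    return kappa * np.exp(kappa * (np.dot(mu, d) - 1.0)) / (2.0 * np.pi * (1.0 - np.exp(-2.0 * kappa)))

def reflected_pressure(d, lobes):
    """Evaluate a weighted mixture of vMF lobes, each given as (weight, mu, kappa),
    for an outgoing direction d; a compact stand-in for a precomputed
    directional scattering distribution."""
    return sum(w * vmf_pdf(d, mu, kappa) for (w, mu, kappa) in lobes)

# Example: one broad and one narrow lobe around different reflection directions.
lobes = [(0.7, np.array([0.0, 0.0, 1.0]), 4.0),
         (0.3, np.array([0.5, 0.0, 0.8660254]), 40.0)]
print(reflected_pressure(np.array([0.0, 0.0, 1.0]), lobes))
```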