Search Results
Now showing 1 - 10 of 43
Item ManyLoDs: Parallel Many-View Level-of-Detail Selection for Real-Time Global Illumination (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Holländer, Matthias; Ritschel, Tobias; Eisemann, Elmar; Boubekeur, Tamy. Editors: Ravi Ramamoorthi and Erik Reinhard.
Level-of-Detail structures are a key component for scalable rendering. Built from raw 3D data, these structures are often defined as Bounding Volume Hierarchies, providing coarse-to-fine adaptive approximations that are well-adapted for many-view rasterization. Here, the total number of pixels in each view is usually low, while the cost of choosing the appropriate LoD for each view is high. This task represents a challenge for existing GPU algorithms. We propose ManyLoDs, a new GPU algorithm to efficiently compute many LoDs from a Bounding Volume Hierarchy in parallel by balancing the workload within and among LoDs. Our approach is not specific to a particular rendering technique, can be used on lazy representations such as polygon soups, and can handle dynamic scenes. We apply our method to various many-view rasterization applications, including Instant Radiosity, Point-Based Global Illumination, and reflection/refraction mapping. For each of these, we achieve real-time performance in complex scenes at high resolutions.
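For the ManyLoDs item above, a minimal serial sketch in Python of the per-view LoD cut selection the abstract describes. The node layout, the screen-size error metric, and all names are assumptions for illustration; the paper's actual contribution, balancing this traversal across many views in parallel on the GPU, is not shown.

```python
# Minimal sketch: select a level-of-detail "cut" through a bounding
# volume hierarchy (BVH) for one view. ManyLoDs runs this kind of
# refinement for many views in parallel on the GPU; this is a serial
# CPU illustration with an assumed error metric, not the paper's code.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BVHNode:
    center: tuple                 # bounding-sphere center (x, y, z)
    radius: float                 # bounding-sphere radius
    children: List["BVHNode"] = field(default_factory=list)

def projected_size(node: BVHNode, view_pos: tuple) -> float:
    """Crude screen-space size estimate: radius over distance to the view."""
    dist = sum((c - v) ** 2 for c, v in zip(node.center, view_pos)) ** 0.5
    return node.radius / max(dist, 1e-6)

def select_cut(root: BVHNode, view_pos: tuple, max_size: float) -> List[BVHNode]:
    """Refine nodes until every node on the cut is a leaf or projects
    to at most `max_size` for this view; the kept nodes form the LoD."""
    cut, frontier = [], [root]
    while frontier:
        node = frontier.pop()
        if not node.children or projected_size(node, view_pos) <= max_size:
            cut.append(node)                 # coarse enough: keep on the cut
        else:
            frontier.extend(node.children)   # too big on screen: refine
    return cut
```

In the many-view setting each view runs such a traversal concurrently, so the interesting problem, which this sketch sidesteps, is distributing the uneven per-view work across GPU threads.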
Item Optimizing Disparity for Motion in Depth (The Eurographics Association and Blackwell Publishing Ltd., 2013)
Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter. Editors: Nicolas Holzschuch and Szymon Rusinkiewicz.
Beyond the careful design of stereo acquisition equipment and rendering algorithms, disparity post-processing has recently received much attention, where one of the key tasks is to compress the originally large disparity range to avoid viewing discomfort. The perception of dynamic stereo content, however, relies on reproducing the full disparity-time volume that a scene point undergoes in motion. This volume can be strongly distorted by manipulations that only change disparity at one instant in time, even if the temporal coherence of that change is maintained. We propose an optimization to preserve the stereo motion of content that was subject to an arbitrary disparity manipulation, based on a perceptual model of temporal disparity changes. Furthermore, we introduce a novel 3D warping technique to create stereo image pairs that conform to this optimized disparity map. The paper concludes with perceptual studies of motion reproduction quality and task performance in a simple game, showing how our optimization can achieve both viewing comfort and faithful stereo motion.

Item Decomposing Single Images for Layered Photo Retouching (The Eurographics Association and John Wiley & Sons Ltd., 2017)
Innamorati, Carlo; Ritschel, Tobias; Weyrich, Tim; Mitra, Niloy J. Editors: Zwicker, Matthias and Sander, Pedro.
Photographers routinely compose multiple manipulated photos of the same scene into a single image, producing a fidelity difficult to achieve using any individual photo. Alternatively, 3D artists set up rendering systems to produce layered images that isolate individual aspects of the light transport, which are composed into the final result in post-production. Regrettably, these approaches either take considerable time and effort to capture, or remain limited to synthetic scenes. In this paper, we suggest a method to decompose a single image into multiple layers that approximate effects such as shadow, diffuse illumination, albedo, and specular shading. To this end, we extend the idea of intrinsic images along two axes: first, by complementing shading and reflectance with specularity and occlusion, and second, by introducing directional dependence. We do so by training a convolutional neural network (CNN) on synthetic data. Such decompositions can then be manipulated in any off-the-shelf image manipulation software and composited back. We demonstrate the effectiveness of our decomposition on synthetic (i.e., rendered) and real data (i.e., photographs), and use it for photo manipulations that are otherwise impossible to perform on single images. We provide comparisons with state-of-the-art methods and also evaluate the quality of our decompositions via a user study measuring the effectiveness of the resulting photo retouching setup. Supplementary material and code are available for research use at geometry.cs.ucl.ac.uk/projects/2017/layered-retouching.
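For the layered-retouching item above, a minimal sketch of how such layers might be edited and composited back. The multiplicative/additive model (albedo x diffuse x occlusion + specular) is an assumed intrinsic-image-style form, not necessarily the paper's exact compositing, and all names are illustrative.

```python
# Minimal sketch: recomposite edited layers from a single-image
# decomposition. The compositing model below is an assumed
# intrinsic-image-style form, not necessarily the paper's.
import numpy as np

def composite(albedo, diffuse, occlusion, specular):
    """All inputs: float arrays of shape (H, W, 3) in [0, 1].
    Assumed model: albedo * diffuse * occlusion + specular."""
    return np.clip(albedo * diffuse * occlusion + specular, 0.0, 1.0)

# Example retouch: brighten only the diffuse illumination layer,
# leaving albedo, occlusion, and specular highlights untouched.
h, w = 4, 4
layers = {name: np.random.rand(h, w, 3)
          for name in ("albedo", "diffuse", "occlusion", "specular")}
edited = composite(layers["albedo"],
                   layers["diffuse"] * 1.2,   # the actual edit
                   layers["occlusion"],
                   layers["specular"])
```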
Item Virtual Passepartouts (The Eurographics Association, 2012)
Ritschel, Tobias; Templin, Krzysztof; Myszkowski, Karol; Seidel, Hans-Peter. Editors: Paul Asente and Cindy Grimm.
In traditional media, such as photography and painting, a cardboard sheet with a cutout (called a passepartout) is frequently placed on top of an image. One of its functions is to increase the depth impression via the "looking-through-a-window" metaphor. This paper shows how an improved 3D effect can be achieved by using a virtual passepartout: a 2D framing that selectively masks the 3D shape and leads to additional occlusion events between the virtual world and the frame. We introduce a pipeline to design virtual passepartouts interactively as a simple post-process on RGB images augmented with depth information. Additionally, an automated approach finds the optimal virtual passepartout for a given scene. Virtual passepartouts can be used to enhance depth depiction in images and videos with depth information, renderings, and stereo images, as well as for the fabrication of physical passepartouts.

Item Real-time Reflective and Refractive Novel-view Synthesis (The Eurographics Association, 2014)
Lochmann, Gerrit; Reinert, Bernhard; Ritschel, Tobias; Müller, Stefan; Seidel, Hans-Peter. Editors: Jan Bender and Arjan Kuijper and Tatiana von Landesberger and Holger Theisel and Philipp Urban.
We extend novel-view image synthesis from the common diffuse and opaque image formation model to the reflective and refractive case. Our approach uses a ray tree of RGBZ images, where each node contains one RGB light path that is warped differently depending on its depth Z and the type of path. At the core of our approach are two efficient procedures for reflective and refractive warping. Unlike in the diffuse and opaque case, no simple direct solution exists for general geometry. Instead, a per-pixel optimization in combination with informed initial guesses warps an HD image with reflections and refractions in 18 ms on a current mobile GPU. The key application is latency avoidance in remote rendering, in particular for head-mounted displays. Other applications are single-pass stereo or multi-view rendering, motion blur, and depth-of-field rendering, as well as their combinations.

Item Deep-learning the Latent Space of Light Transport (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Hermosilla, Pedro; Maisch, Sebastian; Ritschel, Tobias; Ropinski, Timo. Editors: Boubekeur, Tamy and Sen, Pradeep.
We suggest a method to directly deep-learn light transport, i.e., the mapping from a 3D geometry-illumination-material configuration to a shaded 2D image. While many previous learning methods have employed 2D convolutional neural networks applied to images, we show for the first time that light transport can be learned directly in 3D. The benefit of 3D over 2D is that the former can also correctly capture illumination effects related to occluded and/or semi-transparent geometry. To learn 3D light transport, we represent the 3D scene as an unstructured 3D point cloud, which is later, during rendering, projected to the 2D output image. Accordingly, we suggest a two-stage operator: a 3D network first transforms the point cloud into a latent representation, which a dedicated 3D-2D network then projects to the 2D output image. We show that our approach results in improved quality in terms of temporal coherence while retaining most of the computational efficiency of common 2D methods. As a consequence, the proposed two-stage operator serves as a valuable extension to modern deferred shading approaches.

Item Distortion-Free Displacement Mapping (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Zirr, Tobias; Ritschel, Tobias. Editors: Steinberger, Markus and Foley, Tim.
Displacement mapping is routinely used to add geometric detail in a fast and easy-to-control way, both in offline rendering and, more recently, in interactive applications such as games. However, it has gone largely unnoticed (with the exception of McGuire and Whitson [MW08]) that, when displacement mapping is applied to a surface with a low-distortion parametrization, this parametrization becomes distorted because the displacement changes the geometry. Typical resulting artifacts are "rubber band"-like distortion patterns in areas of strong displacement change, where a small isotropic area in texture space is mapped to a large anisotropic area in world space. We describe a fast, fully GPU-based two-step procedure to resolve this problem. First, a correction deformation is computed from the displacement map. Second, we propose two variants to apply this correction when computing the displacement mapping. The first variant is backward-compatible and can resolve the artifact in any rendering pipeline without modifying it and without requiring additional computation at render time, but it only works for bijective parametrizations. The second variant works for more general parametrizations, but requires modifying the rendering code and incurs a very small computational overhead.

Item Deep Shading: Convolutional Neural Networks for Screen Space Shading (The Eurographics Association and John Wiley & Sons Ltd., 2017)
Nalbach, Oliver; Arabadzhiyska, Elena; Mehta, Dushyant; Seidel, Hans-Peter; Ritschel, Tobias. Editors: Zwicker, Matthias and Sander, Pedro.
In computer vision, convolutional neural networks (CNNs) achieve unprecedented performance for inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals, or reflectance. In computer graphics, screen space shading has boosted the quality of real-time rendering, converting the same kind of attributes of a virtual scene back to appearance, enabling effects like ambient occlusion, indirect light, scattering, and many more. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading renders screen space effects at competitive quality and speed while not being programmed by human experts but learned from example images.
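For the Deep Shading item just above, a toy fully convolutional network in PyTorch that maps a stack of screen-space attribute buffers to shaded RGB. The three-layer architecture, channel widths, and nine-channel attribute layout (e.g., normal + position + albedo) are assumptions for illustration; the network in the paper is substantially deeper and is trained on rendered examples.

```python
# Minimal sketch: a fully convolutional network mapping screen-space
# attribute buffers (normals, positions, reflectance, ...) to a shaded
# RGB image, in the spirit of Deep Shading. Depth and widths are toy
# choices, not the paper's architecture.
import torch
import torch.nn as nn

class ToyDeepShader(nn.Module):
    """Maps stacked per-pixel attributes to shaded RGB."""
    def __init__(self, attribute_channels: int = 9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(attribute_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),   # RGB output
        )

    def forward(self, gbuffer: torch.Tensor) -> torch.Tensor:
        # gbuffer: (batch, attribute_channels, height, width)
        return self.net(gbuffer)

# Training would regress against conventionally shaded reference
# images, e.g. with an L2 loss; here we just run a forward pass.
model = ToyDeepShader()
gbuffer = torch.randn(1, 9, 64, 64)   # hypothetical normal+position+albedo stack
shaded = model(gbuffer)               # (1, 3, 64, 64)
```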
Item Efficient Multi-image Correspondences for On-line Light Field Video Processing (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Dąbała, Łukasz; Ziegler, Matthias; Didyk, Piotr; Zilly, Frederik; Keinert, Joachim; Myszkowski, Karol; Seidel, Hans-Peter; Rokita, Przemysław; Ritschel, Tobias. Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi.
Light field videos express the entire visual information of an animated scene, but their sheer size typically makes capture, processing, and display an off-line process, i.e., the time between initial capture and final display is far from real-time. In this paper we propose a solution for one of the key bottlenecks in such a processing pipeline: reliable depth reconstruction, possibly for many views. This is enabled by a novel correspondence algorithm converting the video streams from a sparse array of off-the-shelf cameras into an array of animated depth maps. The algorithm is based on a generalization of the classic multi-resolution Lucas-Kanade correspondence algorithm from a pair of images to an entire array. Special inter-image confidence consolidation allows recovery from unreliable matching in some locations and some views. The algorithm can be implemented efficiently in massively parallel hardware, allowing for interactive computation. The resulting depth quality as well as the computational performance compares favorably to other state-of-the-art light-field-to-depth approaches, as well as to stereo matching techniques. Another outcome of this work is a data set of light field videos that were captured with multiple variants of sparse camera arrays.

Item Interactive Appearance Editing in RGB-D Images (The Eurographics Association, 2014)
Bergmann, Stephan; Ritschel, Tobias; Dachsbacher, Carsten. Editors: Jan Bender and Arjan Kuijper and Tatiana von Landesberger and Holger Theisel and Philipp Urban.
The availability of increasingly powerful and affordable image and depth sensors, in conjunction with the necessary processing power, creates novel possibilities for more sophisticated and powerful image editing tools. Along these lines, we present a method to alter the appearance of objects in RGB-D images by re-shading their surfaces with arbitrary BRDF models and with subsurface scattering using the dipole diffusion approximation. To evaluate the incident light for re-shading, we combine ray marching, using the depth buffer as approximate geometry, with environment lighting. The environment map is built from information solely contained in the RGB-D input image, exploiting both the reflections on glossy surfaces and the geometric information. Our CPU/GPU implementation provides interactive feedback to facilitate intuitive editing. We compare and demonstrate our method with rendered images and digital photographs.
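For the RGB-D appearance-editing item above, a minimal sketch of one ingredient any such system needs: estimating per-pixel normals from the depth buffer and re-shading with a new reflectance model. The normal-from-depth-gradients construction is a standard approximation and the Lambertian shading a placeholder; the paper's actual pipeline (arbitrary BRDFs, dipole subsurface scattering, environment lighting recovered from the image itself) is not reproduced here.

```python
# Minimal sketch: per-pixel normals from a depth image, then re-shading
# with a placeholder Lambertian BRDF under one directional light.
import numpy as np

def normals_from_depth(depth: np.ndarray) -> np.ndarray:
    """depth: (H, W) view-space depth. Returns unit normals, shape
    (H, W, 3), from screen-space depth gradients (standard approximation)."""
    dzdx = np.gradient(depth, axis=1)
    dzdy = np.gradient(depth, axis=0)
    n = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def reshade_lambert(albedo: np.ndarray, depth: np.ndarray, light_dir) -> np.ndarray:
    """Re-shade an RGB-D pixel grid. albedo: (H, W, 3); depth: (H, W)."""
    n = normals_from_depth(depth)
    ndotl = np.clip(n @ np.asarray(light_dir, dtype=float), 0.0, None)  # (H, W)
    return albedo * ndotl[..., None]

# Usage on a synthetic 8x8 RGB-D tile with a slight depth ramp:
albedo = np.full((8, 8, 3), 0.5)
depth = np.fromfunction(lambda y, x: 1.0 + 0.05 * x, (8, 8))
img = reshade_lambert(albedo, depth, (0.0, 0.0, 1.0))
```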