Search Results

Now showing 1 - 10 of 37
  • Item
    Optimizing Disparity for Motion in Depth
    (The Eurographics Association and Blackwell Publishing Ltd., 2013) Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter; Nicolas Holzschuch and Szymon Rusinkiewicz
    Beyond the careful design of stereo acquisition equipment and rendering algorithms, disparity post-processing has recently received much attention, where one of the key tasks is to compress the originally large disparity range to avoid viewing discomfort. The perception of dynamic stereo content, however, relies on reproducing the full disparity-time volume that a scene point traverses while in motion. This volume can be strongly distorted by a manipulation that is only concerned with changing disparity at one instant in time, even if the temporal coherence of that change is maintained. We propose an optimization to preserve the stereo motion of content that was subject to an arbitrary disparity manipulation, based on a perceptual model of temporal disparity changes. Furthermore, we introduce a novel 3D warping technique to create stereo image pairs that conform to this optimized disparity map. The paper concludes with perceptual studies of motion reproduction quality and task performance in a simple game, showing how our optimization can achieve both viewing comfort and faithful stereo motion.
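    A minimal sketch of the kind of objective such an optimization might use (illustrative only, not the paper's perceptual model): keep the output disparity close to the manipulated map while reproducing the original frame-to-frame disparity changes, solved per scene point as a 1D least-squares problem over time.

        import numpy as np

        def preserve_disparity_motion(d_orig, d_manip, w_motion=4.0):
            # d_orig, d_manip: (T,) disparity of one scene point over T
            # frames, before and after manipulation. Minimizes
            #   |d - d_manip|^2 + w * |diff(d) - diff(d_orig)|^2,
            # i.e., stay near the compressed disparity but keep the
            # original temporal disparity changes (motion in depth).
            T = len(d_orig)
            D = np.diff(np.eye(T), axis=0)      # (T-1, T) finite differences
            A = np.eye(T) + w_motion * D.T @ D
            b = np.asarray(d_manip, float) + w_motion * D.T @ np.diff(d_orig)
            return np.linalg.solve(A, b)

        # Example: motion toward the viewer whose velocity was distorted
        # by a strong nonlinear disparity compression.
        t = np.linspace(0.0, 1.0, 50)
        d_opt = preserve_disparity_motion(30.0 * t, 5.0 * t**2)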
  • Item
    Pattern Search in Flows based on Similarity of Stream Line Segments
    (The Eurographics Association, 2014) Wang, Zhongjie; Esturo, Janick Martinez; Seidel, Hans-Peter; Weinkauf, Tino; Jan Bender and Arjan Kuijper and Tatiana von Landesberger and Holger Theisel and Philipp Urban
    We propose a method that allows users to define flow features in the form of patterns represented as sparse sets of stream line segments. Our approach finds "similar" occurrences in the same or other time steps. Related approaches define patterns using dense, local stencils or support only single segments. Our patterns are defined sparsely and can have a significant extent, i.e., they are integration-based and not local. This allows for greater flexibility in defining features of interest. Similarity is measured using intrinsic curve properties only, which enables invariance to location, orientation, and scale. Our method starts by splitting stream lines using globally consistent segmentation criteria. It strives to maintain the visually apparent features of the flow as a collection of stream line segments. Most importantly, it provides similar segmentations for similar flow structures. For user-defined patterns of curve segments, our algorithm finds similar ones that are invariant to similarity transformations. We showcase the utility of our method using different 2D and 3D flow fields.
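    A toy version of one ingredient (an assumption-laden sketch; the paper's descriptors and segmentation are more involved): comparing two polyline segments by their curvature profiles, resampled over normalized arc length, which makes the distance invariant to translation, rotation, and uniform scale.

        import numpy as np

        def curvature_profile(pts, n_samples=64):
            # Discrete curvature of a polyline (>= 3 vertices), resampled
            # over normalized arc length. Multiplying by total length
            # makes the profile invariant to uniform scaling.
            pts = np.asarray(pts, float)
            seg = np.diff(pts, axis=0)
            seglen = np.linalg.norm(seg, axis=1)
            s = np.concatenate([[0.0], np.cumsum(seglen)])
            t = seg / seglen[:, None]            # unit tangents
            cosang = np.clip(np.einsum('ij,ij->i', t[:-1], t[1:]), -1.0, 1.0)
            kappa = np.arccos(cosang) / (0.5 * (seglen[:-1] + seglen[1:]))
            u = s[1:-1] / s[-1]                  # arc-length position in [0, 1]
            return np.interp(np.linspace(0, 1, n_samples), u, kappa * s[-1])

        def segment_distance(a, b):
            # L2 distance between intrinsic curvature profiles.
            return np.linalg.norm(curvature_profile(a) - curvature_profile(b))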
  • Item
    Interactive Motion Mapping for Real-time Character Control
    (The Eurographics Association and John Wiley and Sons Ltd., 2014) Rhodin, Helge; Tompkin, James; Kim, Kwang In; Varanasi, Kiran; Seidel, Hans-Peter; Theobalt, Christian; B. Levy and J. Kautz
    It is now possible to capture the 3D motion of the human body on consumer hardware and to puppet skeleton-based virtual characters in real time. However, many characters do not have humanoid skeletons. Characters such as spiders and caterpillars do not have boned skeletons at all, and these characters have very different shapes and motions. In general, character control under arbitrary shape and motion transformations is unsolved: how might these motions be mapped? We control characters with a method that avoids the rigging-skinning pipeline; source and target characters do not need skeletons or rigs. We use interactively defined sparse pose correspondences to learn a mapping between arbitrary 3D point source sequences and mesh target sequences. Then, we puppet the target character in real time. We demonstrate the versatility of our method through results on diverse virtual characters with different input motion controllers. Our method provides a fast, flexible, and intuitive interface for arbitrary motion mapping, opening new ways to control characters for real-time animation.
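    One common way to realize such a mapping (a hypothetical sketch, not the authors' learner): radial-basis-function regression on the sparse pose correspondences, blending the example target poses by the similarity of the live source pose to the example source poses.

        import numpy as np

        class SparsePoseMapper:
            # Toy RBF regression from flattened source point clouds
            # (K examples, Ds values each) to flattened target mesh
            # vertices (K examples, Dt values each).
            def __init__(self, src_poses, tgt_poses, sigma=1.0):
                self.X = np.asarray(src_poses, float)
                self.sigma = sigma
                K = self._kernel(self.X, self.X)
                # A small ridge term keeps the solve stable for
                # near-duplicate example poses.
                self.W = np.linalg.solve(K + 1e-6 * np.eye(len(K)),
                                         np.asarray(tgt_poses, float))

            def _kernel(self, A, B):
                d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                return np.exp(-d2 / (2.0 * self.sigma ** 2))

            def map(self, src_pose):
                # Per frame: one kernel evaluation and one matrix
                # product, cheap enough for real-time puppetry.
                k = self._kernel(np.asarray(src_pose, float)[None, :], self.X)
                return (k @ self.W)[0]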
  • Item
    Real-time Reflective and Refractive Novel-view Synthesis
    (The Eurographics Association, 2014) Lochmann, Gerrit; Reinert, Bernhard; Ritschel, Tobias; Müller, Stefan; Seidel, Hans-Peter; Jan Bender and Arjan Kuijper and Tatiana von Landesberger and Holger Theisel and Philipp Urban
    We extend novel-view image synthesis from the common diffuse and opaque image formation model to the reflective and refractive case. Our approach uses a ray tree of RGBZ images, where each node contains one RGB light path that is to be warped differently depending on the depth Z and the type of path. At the core of our approach are two efficient procedures for reflective and refractive warping. Unlike the diffuse and opaque case, no simple direct solution exists for general geometry. Instead, a per-pixel optimization in combination with informed initial guesses warps an HD image with reflections and refractions in 18 ms on a current mobile GPU. The key application is latency avoidance in remote rendering, in particular for head-mounted displays. Other applications are single-pass stereo or multi-view, motion-blur and depth-of-field rendering, as well as their combinations.
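    For contrast, the diffuse, opaque case does admit a simple direct solution, sketched below under assumed pinhole camera matrices (this is the baseline the paper extends, not its reflective or refractive warp): unproject each pixel with its depth Z, transform it into the novel view, and forward-splat with a z-buffer.

        import numpy as np

        def forward_warp_rgbz(rgb, depth, K, M_src_to_dst):
            # rgb: (H, W, 3); depth: (H, W) positive view-space depth;
            # K: (3, 3) shared intrinsics; M_src_to_dst: (4, 4) relative
            # camera motion. Holes are simply left black.
            H, W = depth.shape
            ys, xs = np.mgrid[0:H, 0:W]
            pix = np.stack([xs, ys, np.ones_like(xs)], -1).reshape(-1, 3)
            cam = (np.linalg.inv(K) @ pix.T) * depth.reshape(1, -1)
            dst = M_src_to_dst @ np.vstack([cam, np.ones((1, cam.shape[1]))])
            z = dst[2]
            uv = (K @ dst[:3]) / z
            u, v = np.round(uv[0]).astype(int), np.round(uv[1]).astype(int)
            ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
            out, zbuf = np.zeros_like(rgb), np.full((H, W), np.inf)
            src = rgb.reshape(-1, 3)
            for i in np.flatnonzero(ok):    # naive z-buffered splatting
                if z[i] < zbuf[v[i], u[i]]:
                    zbuf[v[i], u[i]] = z[i]
                    out[v[i], u[i]] = src[i]
            return out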
  • Item
    Deep Shading: Convolutional Neural Networks for Screen Space Shading
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Nalbach, Oliver; Arabadzhiyska, Elena; Mehta, Dushyant; Seidel, Hans-Peter; Ritschel, Tobias; Zwicker, Matthias and Sander, Pedro
    In computer vision, convolutional neural networks (CNNs) achieve unprecedented performance for inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals or reflectance. In computer graphics, screen space shading has boosted the quality of real-time rendering, converting the same kind of attributes of a virtual scene back to appearance, enabling effects like ambient occlusion, indirect light, scattering and many more. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading renders screen space effects at competitive quality and speed while not being programmed by human experts but learned from example images.
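    A minimal PyTorch sketch of the idea (the architecture here is an assumption, not the paper's network): a small convolutional encoder-decoder that maps per-pixel attributes such as normals, depth and albedo to shaded RGB, trained by regression against rendered ground truth.

        import torch
        import torch.nn as nn

        class TinyDeepShader(nn.Module):
            # Toy attributes-to-appearance CNN. Input channels, e.g.,
            # 3 (normal) + 1 (depth) + 3 (albedo) = 7; output: RGB.
            def __init__(self, in_ch=7):
                super().__init__()
                self.enc = nn.Sequential(
                    nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
                self.dec = nn.Sequential(
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
                    nn.ReLU(),
                    nn.Conv2d(32, 3, 3, padding=1))

            def forward(self, attributes):  # (N, in_ch, H, W) -> (N, 3, H, W)
                return self.dec(self.enc(attributes))

        # One regression step against (here random, in practice rendered)
        # ground-truth shading.
        net = TinyDeepShader()
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        attrs, target = torch.rand(2, 7, 64, 64), torch.rand(2, 3, 64, 64)
        loss = nn.functional.mse_loss(net(attrs), target)
        opt.zero_grad(); loss.backward(); opt.step()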
  • Item
    Efficient Multi-image Correspondences for On-line Light Field Video Processing
    (The Eurographics Association and John Wiley & Sons Ltd., 2016) Dąbała, Łukasz; Ziegler, Matthias; Didyk, Piotr; Zilly, Frederik; Keinert, Joachim; Myszkowski, Karol; Seidel, Hans-Peter; Rokita, Przemysław; Ritschel, Tobias; Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
    Light field videos express the entire visual information of an animated scene, but their sheer size typically makes capture, processing and display an off-line process, i.e., the time between initial capture and final display is far from real-time. In this paper we propose a solution for one of the key bottlenecks in such a processing pipeline: reliable depth reconstruction, possibly for many views. This is enabled by a novel correspondence algorithm converting the video streams from a sparse array of off-the-shelf cameras into an array of animated depth maps. The algorithm is based on a generalization of the classic multi-resolution Lucas-Kanade correspondence algorithm from a pair of images to an entire array. A special inter-image confidence consolidation allows recovery from unreliable matching in some locations and some views. It can be implemented efficiently in massively parallel hardware, allowing for interactive computations. The resulting depth quality as well as the computation performance compare favorably to other state-of-the-art light-field-to-depth approaches, as well as stereo matching techniques. Another outcome of this work is a data set of light field videos that are captured with multiple variants of sparse camera arrays.
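    The classic pairwise Lucas-Kanade step that the paper generalizes to whole camera arrays (a textbook sketch, not the authors' consolidated multi-image version): per window, solve a small linear system of image gradients and use its conditioning as a matching confidence.

        import numpy as np

        def lucas_kanade_step(I0, I1, y, x, r=3):
            # One displacement estimate at interior pixel (y, x) from an
            # r-radius window of the grayscale images I0, I1.
            Iy, Ix = np.gradient(I0.astype(float))
            It = I1.astype(float) - I0.astype(float)
            win = np.s_[y - r:y + r + 1, x - r:x + r + 1]
            A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
            # The smallest eigenvalue of A^T A is the usual confidence;
            # low values mark locations where matching is unreliable.
            conf = np.linalg.eigvalsh(A.T @ A)[0]
            flow = np.linalg.lstsq(A, -It[win].ravel(), rcond=None)[0]
            return flow, conf    # (dx, dy) displacement, confidence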
  • Item
    Perceptually-motivated Stereoscopic Film Grain
    (The Eurographics Association and John Wiley and Sons Ltd., 2014) Templin, Krzysztof; Didyk, Piotr; Myszkowski, Karol; Seidel, Hans-Peter; J. Keyser, Y. J. Kim, and P. Wonka
    Independent management of film grain in each view of a stereoscopic video can lead to visual discomfort. The existing alternative is to project the grain onto the scene geometry. Such grain, however, looks unnatural, changes object perception, and emphasizes inaccuracies in depth arising during 2D-to-3D conversion. We propose an advanced method of grain positioning that scatters the grain in scene space. In a series of perceptual experiments, we estimate the optimal parameter values for the proposed method, analyze the user preference distribution among the proposed and the two existing methods, and show the influence of the method on object perception.
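    A hedged sketch of the basic placement idea (parameters and the disparity model are illustrative, not the paper's tuned method): jitter each grain particle's depth around the local scene depth and splat it into each view with the horizontal offset that depth implies.

        import numpy as np

        def scatter_grain_stereo(depth, n_grains, baseline_px=20.0,
                                 depth_jitter=0.05, seed=0):
            # depth: (H, W) normalized scene depth in (0, 1]; the toy
            # disparity model is simply baseline_px / depth.
            rng = np.random.default_rng(seed)
            H, W = depth.shape
            gy = rng.integers(0, H, n_grains)
            gx = rng.integers(0, W, n_grains)
            # Scatter around the surface instead of sticking to it.
            gz = np.clip(depth[gy, gx] +
                         rng.normal(0.0, depth_jitter, n_grains), 1e-3, None)
            disp = baseline_px / gz
            left = np.stack([gx - 0.5 * disp, gy], axis=1)
            right = np.stack([gx + 0.5 * disp, gy], axis=1)
            return left, right    # per-view grain positions to splat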
  • Item
    Manipulating Refractive and Reflective Binocular Disparity
    (The Eurographics Association and John Wiley and Sons Ltd., 2014) Dabala, Lukasz; Kellnhofer, Petr; Ritschel, Tobias; Didyk, Piotr; Templin, Krzysztof; Myszkowski, Karol; Rokita, P.; Seidel, Hans-Peter; B. Levy and J. Kautz
    Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account for the binocular disparity of surfaces that are diffuse and opaque only. However, combinations of transparent as well as specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e.g., for glass that both reflects and refracts, and these may confuse the observer and result in a poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, assuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates that our approach combines comfortable viewing with realistic depiction of typical specular scenes.
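    A toy instance of the control step (a sketch under strong simplifications; the paper optimizes per-pixel camera parameters): given candidate disparity maps for every light path a pixel may be perceived through, shrink a single camera baseline until every candidate falls inside a comfort range.

        import numpy as np

        def max_comfortable_baseline(layer_disparities, comfort_px=30.0):
            # layer_disparities: list of (H, W) signed disparity maps at a
            # reference baseline of 1.0, one per light path (diffuse,
            # reflected, refracted, ...). With a pinhole stereo rig,
            # disparity scales linearly in the baseline, so the worst-case
            # pixel over all layers bounds the usable baseline.
            worst = max(np.abs(d).max() for d in layer_disparities)
            return min(1.0, comfort_px / worst)

        # Example: a reflection layer with much larger disparities than
        # the diffuse layer dominates the achievable baseline.
        diffuse = np.full((4, 4), 10.0)
        reflection = np.full((4, 4), 90.0)
        scale = max_comfortable_baseline([diffuse, reflection])   # 1/3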
  • Item
    Spectral Ray Differentials
    (The Eurographics Association and John Wiley and Sons Ltd., 2014) Elek, Oskar; Bauszat, Pablo; Ritschel, Tobias; Magnor, Marcus; Seidel, Hans-Peter; Wojciech Jarosz and Pieter Peers
    Light refracted by a dispersive interface leads to beautifully colored patterns that can be rendered faithfully with spectral Monte Carlo methods. Regrettably, results often suffer from chromatic noise or banding, requiring high sampling rates and large amounts of memory compared to renderers operating in some trichromatic color space. Addressing this issue, we introduce spectral ray differentials, which describe the change of light direction with respect to changes in the spectrum. In analogy to the classic ray and photon differentials, this information can be used for filtering in the spectral domain. The effectiveness of our approach is demonstrated by filtering for offline spectral light and path tracing as well as for an interactive GPU photon mapper based on splatting. Our results show considerably less chromatic noise and spatial aliasing while retaining good visual similarity to reference solutions, with negligible overhead on the order of milliseconds.
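    The tracked quantity is easy to illustrate (a sketch; the dispersion model, Cauchy's equation with made-up coefficients, is an assumption, and the derivative is taken numerically rather than analytically as in the paper): differentiate Snell's law with respect to wavelength to get how the refracted direction changes across the spectrum.

        import numpy as np

        def refract(d, n, eta):
            # Snell's law for unit direction d and unit normal n;
            # eta = ior_in / ior_out. Assumes no total internal reflection.
            c = -np.dot(d, n)
            k = 1.0 - eta * eta * (1.0 - c * c)
            return eta * d + (eta * c - np.sqrt(k)) * n

        def cauchy_ior(lam_nm, A=1.5046, B=4200.0):
            # Toy dispersive glass: n(lambda) = A + B / lambda^2.
            return A + B / (lam_nm * lam_nm)

        def spectral_ray_differential(d, n, lam_nm, h=0.5):
            # Central difference dT/dlambda of the refracted direction:
            # the change of light direction w.r.t. the spectrum.
            t0 = refract(d, n, 1.0 / cauchy_ior(lam_nm - h))
            t1 = refract(d, n, 1.0 / cauchy_ior(lam_nm + h))
            return (t1 - t0) / (2.0 * h)

        d = np.array([0.0, -1.0, 0.0])                 # incident ray
        n = np.array([np.sin(0.3), np.cos(0.3), 0.0])  # tilted normal
        dT_dlam = spectral_ray_differential(d, n, 550.0)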
  • Item
    Sky Based Light Metering for High Dynamic Range Images
    (The Eurographics Association and John Wiley and Sons Ltd., 2014) Gryaditskya, Yulia; Pouli, Tania; Reinhard, Erik; Seidel, Hans-Peter; J. Keyser, Y. J. Kim, and P. Wonka
    Image calibration requires both linearization of pixel values and scaling so that values in the image correspond to real-world luminances. In this paper we focus on the latter and, rather than relying on camera characterization, we calibrate images by analysing their content and metadata, obviating the need for expensive measuring devices or modeling of lens and camera combinations. Our analysis correlates sky pixel values to the luminances that would be expected based on geographical metadata. Combined with high dynamic range (HDR) imaging, which gives us linear pixel data, our algorithm allows us to find absolute luminance values for each pixel, effectively turning digital cameras into absolute light meters. To validate our algorithm we have collected and annotated a calibrated set of HDR images and compared our estimates with several other approaches, showing that our approach recovers absolute luminance more accurately. We discuss various applications and demonstrate the utility of our method in the context of calibrated color appearance reproduction and lighting design.
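    A deliberately simplified sketch of the scaling step (the zenith-luminance fit below is a crude stand-in; the paper drives a proper analysis with geographical metadata): estimate the expected sky luminance from the solar elevation, then scale the linear HDR pixels so that masked sky pixels match it.

        import numpy as np

        def zenith_luminance_cd_m2(sun_elevation_deg):
            # Crude clear-sky zenith luminance fit (illustrative only).
            g = np.radians(max(sun_elevation_deg, 0.0))
            return 100.0 + 6000.0 * np.sin(g)

        def calibrate_to_absolute(hdr_linear, sky_mask, sun_elevation_deg):
            # Scale a linear HDR image (H, W, 3) so that its sky region
            # matches the expected luminance, yielding values in cd/m^2.
            Y = hdr_linear @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709
            observed = np.median(Y[sky_mask])
            scale = zenith_luminance_cd_m2(sun_elevation_deg) / observed
            return hdr_linear * scale, scale

        # Usage sketch: sky_mask from a sky segmentation, sun elevation
        # from capture time and GPS metadata (both assumed available).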