Search Results

Now showing 1 - 10 of 22
  • Item
    Optimizing Disparity for Motion in Depth
    (The Eurographics Association and Blackwell Publishing Ltd., 2013) Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter; Nicolas Holzschuch and Szymon Rusinkiewicz
    Beyond the careful design of stereo acquisition equipment and rendering algorithms, disparity post-processing has recently received much attention, where one of the key tasks is to compress the originally large disparity range to avoid viewing discomfort. The perception of dynamic stereo content, however, relies on reproducing the full disparity-time volume that a scene point undergoes in motion. This volume can be strongly distorted by a manipulation that is only concerned with changing disparity at one instant in time, even if the temporal coherence of that change is maintained. We propose an optimization to preserve the stereo motion of content that was subject to an arbitrary disparity manipulation, based on a perceptual model of temporal disparity changes. Furthermore, we introduce a novel 3D warping technique to create stereo image pairs that conform to this optimized disparity map. The paper concludes with perceptual studies of motion reproduction quality and task performance in a simple game, showing how our optimization can achieve both viewing comfort and faithful stereo motion.
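    As a rough illustration of the kind of objective such a post-process might balance (not the paper's actual perceptual model), the sketch below keeps a compressed disparity sequence close to its target range while preserving the original frame-to-frame disparity velocity. The weighting, step size, and function name are assumptions.

    ```python
    import numpy as np

    def preserve_disparity_velocity(d_orig, d_comp, w_vel=0.8, iters=200, lr=0.2):
        """Toy least-squares blend: stay close to the compressed disparities
        while keeping the original frame-to-frame disparity changes (velocity).
        d_orig, d_comp: arrays of shape (T, H, W) with per-pixel disparity."""
        d = d_comp.copy()
        v_orig = np.diff(d_orig, axis=0)       # original disparity velocity
        for _ in range(iters):
            data_grad = d - d_comp             # pull toward the compressed range
            v_err = np.diff(d, axis=0) - v_orig
            vel_grad = np.zeros_like(d)
            vel_grad[:-1] -= v_err             # gradient of the velocity term
            vel_grad[1:] += v_err
            d -= lr * (data_grad + w_vel * vel_grad)
        return d
    ```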
  • Item
    Virtual Passepartouts
    (The Eurographics Association, 2012) Ritschel, Tobias; Templin, Krzysztof; Myszkowski, Karol; Seidel, Hans-Peter; Paul Asente and Cindy Grimm
    In traditional media, such as photography and painting, a cardboard sheet with a cutout (called a passepartout) is frequently placed on top of an image. One of its functions is to increase the depth impression via the "looking-through-a-window" metaphor. This paper shows how an improved 3D effect can be achieved by using a virtual passepartout: a 2D framing that selectively masks the 3D shape and leads to additional occlusion events between the virtual world and the frame. We introduce a pipeline to design virtual passepartouts interactively as a simple post-process on RGB images augmented with depth information. Additionally, an automated approach finds the optimal virtual passepartout for a given scene. Virtual passepartouts can be used to enhance depth depiction in images and videos with depth information, in renderings and stereo images, and for the fabrication of physical passepartouts.
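    One way to picture the "looking-through-a-window" effect in code (a minimal sketch, not the interactive pipeline from the paper): composite an opaque 2D frame over an RGB-D image, but let scene content that lies in front of an assumed window depth overdraw the frame, creating the extra occlusion events. The frame mask, window depth, and parameter names are illustrative assumptions.

    ```python
    import numpy as np

    def virtual_passepartout(rgb, depth, frame_rgb, frame_mask, window_depth):
        """rgb: (H, W, 3) image; depth: (H, W) in the same units as window_depth;
        frame_mask: (H, W) bool, True where the 2D frame covers the image;
        frame_rgb: (H, W, 3) appearance of the frame.
        Scene content nearer than the window 'pops out' in front of the frame."""
        out = rgb.copy()
        # the frame occludes the scene only where the scene is behind the window
        covered = frame_mask & (depth >= window_depth)
        out[covered] = frame_rgb[covered]
        return out
    ```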
  • Item
    Consistent Filtering of Videos and Dense Light-Fields Without Optic-Flow
    (The Eurographics Association, 2019) Shekhar, Sumit; Semmo, Amir; Trapp, Matthias; Tursun, Okan; Pasewaldt, Sebastian; Myszkowski, Karol; Döllner, Jürgen; Schulz, Hans-Jörg and Teschner, Matthias and Wimmer, Michael
    A convenient post-production video processing approach is to apply image filters on a per-frame basis. This allows image filters originally designed for still images to be extended to videos. However, per-image filtering may lead to temporal inconsistencies perceived as unpleasant flickering artifacts, which is also the case for dense light fields due to angular inconsistencies. In this work, we present a method for consistent filtering of videos and dense light fields that addresses these problems. Our assumption is that the inconsistencies caused by per-image filtering manifest as noise across the image sequence. We therefore perform denoising across the filtered image sequence and combine the per-image filtered results with their denoised versions. To this end, we use saliency-based optimization weights to produce a consistent output while simultaneously preserving details. To control the degree of consistency in the final output, we implemented our approach in an interactive real-time processing framework. Unlike state-of-the-art inconsistency removal techniques, our approach does not rely on optic flow to enforce coherence. Comparisons and a qualitative evaluation indicate that our method provides better results than state-of-the-art approaches for certain types of filters and applications.
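    A minimal sketch of the blending step described above, under the stated assumption that per-frame filtering noise can be attenuated along the time axis. The temporal Gaussian used as the denoiser and the direct saliency weighting are simplifications, not the paper's actual components.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def consistent_filtering(filtered_frames, saliency, sigma_t=2.0):
        """filtered_frames: (T, H, W, 3) per-frame filtered video;
        saliency: (T, H, W) weights in [0, 1], high where detail matters.
        Treat flicker as noise along time, denoise, then blend details back."""
        denoised = gaussian_filter1d(filtered_frames, sigma=sigma_t, axis=0)
        w = saliency[..., None]                  # broadcast over color channels
        return w * filtered_frames + (1.0 - w) * denoised
    ```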
  • Item
    Selecting Texture Resolution Using a Task-specific Visibility Metric
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Wolski, Krzysztof; Giunchi, Daniele; Kinuwaki, Shinichi; Didyk, Piotr; Myszkowski, Karol; Steed, Anthony; Mantiuk, Rafal K.; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    In real-time rendering, the appearance of scenes is greatly affected by the quality and resolution of the textures used for image synthesis. At the same time, the size of textures determines the performance and the memory requirements of rendering. As a result, finding the optimal texture resolution is critical, but also a non-trivial task, since the visibility of texture imperfections depends on the underlying geometry, illumination, interactions between several texture maps, and viewing positions. Ideally, we would like to automate the task with a visibility metric that can predict the optimal texture resolution. To maximize the performance of such a metric, it should be trained for the given task. This, however, requires sufficient user data, which is often difficult to obtain. To address this problem, we develop a procedure for training an image visibility metric for a specific task while reducing the effort required to collect new data. The procedure involves generating a large dataset using an existing visibility metric, followed by refining that dataset with the help of an efficient perceptual experiment. The refined dataset is then used to retune the metric. This way, we augment sparse perceptual data into a large number of per-pixel annotated visibility maps, which serve as the training data for application-specific visibility metrics. While our approach is general and can potentially be applied to different image distortions, we demonstrate an application in a game engine where we optimize the resolution of various textures, such as albedo and normal maps.
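    The selection step itself can be pictured as the loop below: render with progressively smaller texture resolutions and keep the lowest one whose predicted probability of a visible difference stays under a threshold. The render and metric callables, the threshold, and the max-pooling over the visibility map are placeholders, not the paper's tooling.

    ```python
    def select_texture_resolution(render, visibility_metric, resolutions,
                                  p_visible_max=0.05):
        """resolutions: candidate texture sizes sorted from highest to lowest.
        render(res) -> image rendered with that texture resolution.
        visibility_metric(ref, test) -> per-pixel probability map of a
        visible difference (the retrained, task-specific metric)."""
        reference = render(resolutions[0])       # full-resolution reference
        best = resolutions[0]
        for res in resolutions[1:]:
            p_map = visibility_metric(reference, render(res))
            if p_map.max() <= p_visible_max:
                best = res                       # still indistinguishable
            else:
                break                            # artifacts become visible
        return best
    ```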
  • Item
    Perception-driven Accelerated Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Weier, Martin; Stengel, Michael; Roth, Thorsten; Didyk, Piotr; Eisemann, Elmar; Eisemann, Martin; Grogorick, Steve; Hinkenjann, André; Kruijff, Ernst; Magnor, Marcus; Myszkowski, Karol; Slusallek, Philipp; Victor Ostromoukov and Matthias Zwicker
    Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, spectral effects, etc., especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or light-field displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
  • Item
    Efficient Multi-image Correspondences for On-line Light Field Video Processing
    (The Eurographics Association and John Wiley & Sons Ltd., 2016) Dąbała, Łukasz; Ziegler, Matthias; Didyk, Piotr; Zilly, Frederik; Keinert, Joachim; Myszkowski, Karol; Seidel, Hans-Peter; Rokita, Przemysław; Ritschel, Tobias; Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
    Light field videos express the entire visual information of an animated scene, but their sheer size typically makes capture, processing, and display an off-line process, i.e., the time between initial capture and final display is far from real-time. In this paper we propose a solution for one of the key bottlenecks in such a processing pipeline, which is reliable depth reconstruction, possibly for many views. This is enabled by a novel correspondence algorithm converting the video streams from a sparse array of off-the-shelf cameras into an array of animated depth maps. The algorithm is based on a generalization of the classic multi-resolution Lucas-Kanade correspondence algorithm from a pair of images to an entire array. Special inter-image confidence consolidation allows recovery from unreliable matching in some locations and some views. It can be implemented efficiently in massively parallel hardware, allowing for interactive computations. The resulting depth quality as well as the computation performance compares favorably to other state-of-the-art light field-to-depth approaches, as well as stereo matching techniques. Another outcome of this work is a data set of light field videos that are captured with multiple variants of sparse camera arrays.
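    For context, the classic single-pair Lucas-Kanade step that the paper generalizes to a whole camera array solves a small least-squares system per pixel neighborhood. A minimal dense version, without the multi-resolution pyramid or the inter-image confidence consolidation, might look like this (window size and Sobel-based gradients are assumptions):

    ```python
    import numpy as np
    from scipy.ndimage import sobel, uniform_filter

    def lucas_kanade(im0, im1, window=7):
        """Dense single-level Lucas-Kanade between two grayscale float images.
        Returns per-pixel (u, v) displacement; no pyramid, no confidences."""
        Ix = sobel(im0, axis=1) / 8.0
        Iy = sobel(im0, axis=0) / 8.0
        It = im1 - im0
        # windowed sums of products (the 2x2 normal equations per pixel)
        Sxx = uniform_filter(Ix * Ix, window)
        Sxy = uniform_filter(Ix * Iy, window)
        Syy = uniform_filter(Iy * Iy, window)
        Sxt = uniform_filter(Ix * It, window)
        Syt = uniform_filter(Iy * It, window)
        det = Sxx * Syy - Sxy ** 2
        det = np.where(np.abs(det) < 1e-6, np.inf, det)  # guard ill-conditioned pixels
        u = (-Syy * Sxt + Sxy * Syt) / det
        v = ( Sxy * Sxt - Sxx * Syt) / det
        return u, v
    ```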
  • Item
    Perceptually-motivated Stereoscopic Film Grain
    (The Eurographics Association and John Wiley and Sons Ltd., 2014) Templin, Krzysztof; Didyk, Piotr; Myszkowski, Karol; Seidel, Hans-Peter; J. Keyser, Y. J. Kim, and P. Wonka
    Independent management of film grain in each view of a stereoscopic video can lead to visual discomfort. The existing alternative is to project the grain onto the scene geometry. Such grain, however, looks unnatural, changes object perception, and emphasizes inaccuracies in depth arising during 2D-to-3D conversion. We propose an advanced method of grain positioning that scatters the grain in the scene space. In a series of perceptual experiments, we estimate the optimal parameter values for the proposed method, analyze the user preference distribution among the proposed and the two existing methods, and show the influence of the method on object perception.
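    The core geometric idea of placing a grain particle at a chosen depth and shifting it oppositely in the two views reduces to the standard relation between depth and screen disparity. A toy sketch, in which the camera parameters, sign convention, and grain representation are all assumptions rather than the paper's setup:

    ```python
    import numpy as np

    def grain_disparity(z, focal_px, baseline, convergence_z):
        """Horizontal screen disparity (pixels) of a grain particle at depth z
        for a converged stereo rig; zero at the convergence depth."""
        return focal_px * baseline * (1.0 / z - 1.0 / convergence_z)

    def splat_grain(grain_xy, grain_z, focal_px=1000.0, baseline=0.06,
                    convergence_z=2.0):
        """grain_xy: (N, 2) pixel positions, grain_z: (N,) depths in metres.
        Returns the positions of each grain particle in the left/right views."""
        d = grain_disparity(grain_z, focal_px, baseline, convergence_z)
        shift = np.stack([0.5 * d, np.zeros_like(d)], axis=1)
        return grain_xy + shift, grain_xy - shift
    ```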
  • Item
    Mapping Images to Target Devices: Spatial, Temporal, Stereo, Tone, and Color
    (The Eurographics Association, 2012) Banterle, Francesco; Artusi, Alessandro; Aydin, Tunc O.; Didyk, Piotr; Eisemann, Elmar; Gutierrez, Diego; Mantiuk, Rafael; Myszkowski, Karol; Ritschel, Tobias; Renato Pajarola and Michela Spagnuolo
    Retargeting is a process through which an image or a video is adapted from the display device for which it was meant (target display) to another one (retarget display). The retarget display can have different features from the target one, such as dynamic range, discretization levels, color gamut, multi-view (3D) capability, refresh rate, spatial resolution, etc. This tutorial presents the latest solutions and techniques for retargeting images along various dimensions (such as dynamic range, colors, and temporal and spatial resolutions) and offers for the first time a much-needed holistic view of the field. This includes how to measure and analyze the changes applied to an image/video in terms of quality, using both (subjective) psychophysical experiments and (objective) computational metrics.
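    As one concrete instance of the dynamic-range dimension of retargeting mentioned above, a global tone-mapping operator compresses HDR scene luminance into a displayable range. The sketch below follows the well-known Reinhard global operator and is only a rough illustration of the tutorial's scope; the key value and epsilon are the usual defaults, assumed here.

    ```python
    import numpy as np

    def reinhard_tonemap(hdr_lum, key=0.18, eps=1e-6):
        """Global Reinhard operator: map linear scene luminance to [0, 1)."""
        log_avg = np.exp(np.mean(np.log(hdr_lum + eps)))  # log-average luminance
        scaled = key * hdr_lum / log_avg                  # scale to the chosen key
        return scaled / (1.0 + scaled)                    # compress highlights
    ```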
  • Item
    Manipulating Refractive and Reflective Binocular Disparity
    (The Eurographics Association and John Wiley and Sons Ltd., 2014) Dabala, Lukasz; Kellnhofer, Petr; Ritschel, Tobias; Didyk, Piotr; Templin, Krzysztof; Myszkowski, Karol; Rokita, P.; Seidel, Hans-Peter; B. Levy and J. Kautz
    Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account only for the binocular disparity of surfaces that are diffuse and opaque. However, combinations of transparent as well as specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e.g., for glass, which both reflects and refracts, which may confuse the observer and result in a poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, assuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates that our approach combines comfortable viewing with realistic depiction of typical specular scenes.
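    In the simplest reading, once every possibly perceived disparity per pixel has been collected from the layer decomposition, one cheap global control is to scale the stereo baseline so that the most extreme candidate still falls inside a fusible range. The sketch below does only that; the paper instead optimizes per-pixel camera parameters, and the comfort limit used here is an assumed value.

    ```python
    import numpy as np

    def fusible_baseline_scale(candidate_disparities, comfort_limit_px=30.0):
        """candidate_disparities: (K, H, W) all possibly perceived disparities
        per pixel (surface, reflection, refraction layers), in pixels.
        Returns a scale factor for the camera baseline so that every candidate
        stays within the assumed comfort/fusion limit."""
        worst = np.max(np.abs(candidate_disparities))
        if worst <= comfort_limit_px:
            return 1.0                       # already comfortable, keep as-is
        return comfort_limit_px / worst      # disparity scales ~linearly with baseline
    ```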
  • Item
    NoRM: No-Reference Image Quality Metric for Realistic Image Synthesis
    (The Eurographics Association and John Wiley and Sons Ltd., 2012) Herzog, Robert; Cadík, Martin; Aydin, Tunç O.; Kim, Kwang In; Myszkowski, Karol; Seidel, Hans-Peter; P. Cignoni and T. Ertl
    Synthetically generating images and video frames of complex 3D scenes using photo-realistic rendering software is often prone to artifacts and requires expert knowledge to tune the parameters. The manual work required for detecting and preventing artifacts can be automated through objective quality evaluation of synthetic images. Most practical objective quality assessment methods for natural images rely on a ground-truth reference, which is often not available in rendering applications. While general-purpose no-reference image quality assessment is a difficult problem, we show in a subjective study that the performance of a dedicated no-reference metric as presented in this paper can match state-of-the-art metrics that do require a reference. This level of predictive power is achieved by exploiting information about the underlying synthetic scene (e.g., 3D surfaces, textures) instead of merely considering color, and by training our learning framework with typical rendering artifacts. We show that our method successfully detects various non-trivial types of artifacts, such as noise and clamping bias due to an insufficient number of virtual point light sources, and shadow map discretization artifacts. We also briefly discuss an inpainting method for automatic correction of detected artifacts.
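    A loose sketch of the learning setup described above: per-pixel features derived from the synthetic scene and the rendered image are regressed against artifact-visibility maps produced by a full-reference metric on a training set, after which the model can score new renderings without a reference. The feature layout and the random-forest regressor are assumptions, not the paper's exact model.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def train_noref_metric(features, visibility):
        """features: (N, H, W, F) per-pixel scene/image features for N images;
        visibility: (N, H, W) reference artifact-visibility maps used as labels."""
        X = features.reshape(-1, features.shape[-1])
        y = visibility.reshape(-1)
        model = RandomForestRegressor(n_estimators=50, max_depth=12, n_jobs=-1)
        model.fit(X, y)
        return model

    def predict_visibility(model, features):
        """Per-pixel no-reference visibility prediction for a single image."""
        H, W, F = features.shape
        return model.predict(features.reshape(-1, F)).reshape(H, W)
    ```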