Search Results

Now showing 1 - 10 of 43
  • Item
    High Dynamic Range Techniques in Graphics: from Acquisition to Display
    (The Eurographics Association, 2005) Goesele, Michael; Heidrich, Wolfgang; Höfflinger, Bernd; Krawczyk, Grzegorz; Myszkowski, Karol; Trentacoste, Matthew; Ming Lin and Celine Loscos
    This course is motivated by the recent, tremendous progress in the development and accessibility of high dynamic range (HDR) technology, which creates many interesting opportunities and challenges in graphics. The course presents a complete pipeline for HDR image and video processing, from acquisition through compression and quality evaluation to display. Successful examples of the use of HDR technology in research setups and industrial applications are also provided. Wherever needed, relevant background information on human perception is given, enabling a better understanding of the design choices behind the discussed algorithms and HDR equipment.
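    The acquisition and display ends of such a pipeline can be illustrated with a minimal sketch that is not taken from the course material: several known exposures are merged into a radiance map and then compressed with a simple global tone-mapping operator. The hat-shaped pixel weighting and the Reinhard-style operator below are illustrative assumptions, and the inputs are assumed to be linearized and aligned.

```python
# Minimal sketch: merge multiple exposures of a static scene into an HDR radiance
# map, then tone-map it for a conventional display. Inputs are assumed to be
# linear images in [0, 1] (no camera response recovery) that are perfectly aligned.
import numpy as np

def merge_exposures(images, exposure_times):
    """Weighted average of per-exposure radiance estimates (hat weighting)."""
    acc = np.zeros(images[0].shape, dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        img = img.astype(np.float64)
        w = 1.0 - np.abs(2.0 * img - 1.0)      # trust mid-range pixels most
        acc += w * (img / t)                   # radiance estimate for this exposure
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-8)

def tone_map(hdr, key=0.18):
    """Simple global photographic (Reinhard-style) operator for display."""
    luminance = hdr.mean(axis=-1, keepdims=True) + 1e-8
    log_avg = np.exp(np.mean(np.log(luminance)))
    scaled = (key / log_avg) * hdr
    return scaled / (1.0 + scaled)             # compress into [0, 1)

# Usage: ldr = tone_map(merge_exposures([im_short, im_mid, im_long], [1/500, 1/60, 1/8]))
```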
  • Item
    State of the Art in Global Illumination for Interactive Applications and High-quality Animations
    (Blackwell Publishers, Inc and the Eurographics Association, 2003) Damez, Cyrille; Dmitriev, Kirill; Myszkowski, Karol
    Global illumination algorithms are regarded as computationally intensive. This cost is a practical problem when producing animations or when interactions with complex models are required. Several algorithms have been proposed to address this issue. Roughly, two families of methods can be distinguished. The first aims at providing interactive feedback for lighting design applications. The second gives higher priority to the quality of results and therefore relies on offline computations. Recently, impressive advances have been made in both categories. In this report, we present a survey and classification of the most up-to-date of these methods. ACM CCS: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism
  • Item
    Optimizing Disparity for Motion in Depth
    (The Eurographics Association and Blackwell Publishing Ltd., 2013) Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter; Nicolas Holzschuch and Szymon Rusinkiewicz
    Beyond the careful design of stereo acquisition equipment and rendering algorithms, disparity post-processing has recently received much attention, where one of the key tasks is to compress the originally large disparity range to avoid viewing discomfort. The perception of dynamic stereo content, however, relies on reproducing the full disparity-time volume that a scene point undergoes in motion. This volume can be strongly distorted by manipulations that are only concerned with changing disparity at one instant in time, even if the temporal coherence of that change is maintained. We propose an optimization to preserve the stereo motion of content that was subject to an arbitrary disparity manipulation, based on a perceptual model of temporal disparity changes. Furthermore, we introduce a novel 3D warping technique to create stereo image pairs that conform to this optimized disparity map. The paper concludes with perceptual studies of motion reproduction quality and task performance in a simple game, showing how our optimization can achieve both viewing comfort and faithful stereo motion.
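    The underlying trade-off can be phrased as a small least-squares problem. The sketch below is an illustrative simplification, not the paper's method: it optimizes a single scene point's disparity trajectory and replaces the perceptual model of temporal disparity changes with a plain quadratic penalty whose weight lam is a hypothetical parameter.

```python
# Stay close to the manipulated disparities while matching the temporal disparity
# changes (motion in depth) of the original trajectory.
import numpy as np

def preserve_disparity_motion(d_orig, d_manip, lam=4.0):
    """d_orig, d_manip: disparity of one scene point over T frames (1-D arrays).
    Minimizes ||d - d_manip||^2 + lam * ||D d - D d_orig||^2, where D is the
    forward finite-difference operator, and returns the optimized trajectory d."""
    T = len(d_orig)
    D = np.eye(T, k=1)[:-1] - np.eye(T)[:-1]    # (T-1) x T difference matrix
    A = np.eye(T) + lam * D.T @ D               # normal equations of the energy
    b = np.asarray(d_manip) + lam * D.T @ (D @ np.asarray(d_orig))
    return np.linalg.solve(A, b)
```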
  • Item
    Lightness Perception in Tone Reproduction for High Dynamic Range Images
    (The Eurographics Association and Blackwell Publishing, Inc, 2005) Krawczyk, Grzegorz; Myszkowski, Karol; Seidel, Hans-Peter
  • Item
    An Efficient Spatio-Temporal Architecture for Animation Rendering
    (The Eurographics Association, 2003) Havran, Vlastimil; Damez, Cyrille; Myszkowski, Karol; Seidel, Hans-Peter; Philip Dutre and Frank Suykens and Per H. Christensen and Daniel Cohen-Or
    Producing high-quality animations featuring rich object appearance and compelling lighting effects is very time consuming with traditional frame-by-frame rendering systems. In this paper we present a rendering architecture that computes multiple frames at once by exploiting the coherence between image samples in the temporal domain. For each sample representing a given point in the scene, we update its view-dependent components for each frame and add its contribution to pixels identified through the compensation of camera and object motion. This leads naturally to high-quality motion blur and significantly reduces the cost of illumination computations. The required visibility information is provided by a custom ray tracing acceleration data structure built for multiple frames simultaneously. We demonstrate that precise and costly global illumination techniques such as bidirectional path tracing become affordable in this rendering architecture.
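    The sample-reuse idea can be sketched as follows; this is an illustrative toy, not the paper's architecture, and all names here (view_dependent_term, accumulate_sample, the Blinn-Phong stand-in, the per-frame project callables) are assumptions of the sketch rather than the authors' API.

```python
# Compute the view-independent (diffuse) shading of a scene sample once, re-evaluate
# only a toy view-dependent term per frame, and splat the result into the pixel the
# sample reprojects to under each frame's camera (motion compensation).
import numpy as np

def view_dependent_term(position, normal, eye, light_dir, shininess=32.0):
    """Toy Blinn-Phong specular lobe standing in for per-frame view-dependent shading."""
    view = eye - position
    view = view / np.linalg.norm(view)
    half = view + light_dir
    half = half / np.linalg.norm(half)
    return max(float(normal @ half), 0.0) ** shininess

def accumulate_sample(position, normal, diffuse, cameras, frames, weights, light_dir):
    """Add one sample's contribution to every frame in which it is visible.
    cameras: list of (eye, project) pairs, where project maps a 3-D point to an
    integer pixel (x, y) or None when the reprojected sample is off-screen.
    frames: list of H x W x 3 buffers; weights: list of H x W sample counters."""
    for (eye, project), frame, weight in zip(cameras, frames, weights):
        pix = project(position)                 # motion-compensated reprojection
        if pix is None:
            continue                            # sample not visible in this frame
        x, y = pix
        spec = view_dependent_term(position, normal, eye, light_dir)
        frame[y, x] += diffuse + spec           # diffuse computed once, spec per frame
        weight[y, x] += 1.0
```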
  • Item
    Virtual Passepartouts
    (The Eurographics Association, 2012) Ritschel, Tobias; Templin, Krzysztof; Myszkowski, Karol; Seidel, Hans-Peter; Paul Asente and Cindy Grimm
    In traditional media, such as photography and painting, a cardboard sheet with a cutout (called a passepartout) is frequently placed on top of an image. One of its functions is to increase the depth impression via the "looking-through-a-window" metaphor. This paper shows how an improved 3D effect can be achieved by using a virtual passepartout: a 2D framing that selectively masks the 3D shape and leads to additional occlusion events between the virtual world and the frame. We introduce a pipeline to design virtual passepartouts interactively as a simple post-process on RGB images augmented with depth information. Additionally, an automated approach finds the optimal virtual passepartout for a given scene. Virtual passepartouts can be used to enhance depth depiction in images and videos with depth information, renderings, and stereo images, as well as in the fabrication of physical passepartouts.
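    The masking step itself is simple to illustrate. Below is a minimal sketch under stated assumptions, not the paper's interactive pipeline: a frame plane at a fixed depth is composited over an RGB-D image, so the scene shows through the frame only where it is closer to the viewer than the plane, producing the additional occlusion events.

```python
# Composite a virtual passepartout over an RGB-D image. Inside the cutout the scene
# is always visible; on the frame, scene pixels that lie behind the frame plane are
# covered by the frame colour, while nearer pixels appear to pop out in front of it.
import numpy as np

def apply_virtual_passepartout(rgb, depth, frame_mask, frame_depth, frame_color):
    """rgb: H x W x 3 image; depth: H x W map (smaller = closer to the viewer);
    frame_mask: H x W bool, True where the frame (not the cutout) lies;
    frame_depth: scalar depth of the frame plane; frame_color: length-3 colour."""
    out = rgb.copy()
    occluded = frame_mask & (depth >= frame_depth)   # scene behind the frame plane
    out[occluded] = frame_color
    return out
```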
  • Item
    Temporally Coherent Irradiance Caching for High Quality Animation Rendering
    (The Eurographics Association and Blackwell Publishing, Inc, 2005) Smyk, Miloslaw; Kinuwaki, Shin-ichi; Durikovic, Roman; Myszkowski, Karol
  • Item
    Consistent Filtering of Videos and Dense Light-Fields Without Optic-Flow
    (The Eurographics Association, 2019) Shekhar, Sumit; Semmo, Amir; Trapp, Matthias; Tursun, Okan; Pasewaldt, Sebastian; Myszkowski, Karol; Döllner, Jürgen; Schulz, Hans-Jörg and Teschner, Matthias and Wimmer, Michael
    A convenient post-production video processing approach is to apply image filters on a per-frame basis. This allows the flexibility of extending image filters, originally designed for still images, to videos. However, per-image filtering may lead to temporal inconsistencies perceived as unpleasant flickering artifacts, which is also the case for dense light-fields due to angular inconsistencies. In this work, we present a method for consistent filtering of videos and dense light-fields that addresses these problems. Our assumption is that inconsistencies due to per-image filtering manifest as noise across the image sequence. We thus perform denoising across the filtered image sequence and combine the per-image filtered results with their denoised versions. Here, we use saliency-based optimization weights to produce a consistent output while simultaneously preserving details. To control the degree of consistency in the final output, we implemented our approach in an interactive real-time processing framework. Unlike state-of-the-art inconsistency removal techniques, our approach does not rely on optic-flow for enforcing coherence. Comparisons and a qualitative evaluation indicate that our method provides better results than state-of-the-art approaches for certain types of filters and applications.
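    The blending step can be sketched as follows; this is a hedged illustration only, in which the paper's denoiser and saliency model are replaced by a plain temporal box filter and a user-supplied saliency map.

```python
# Blend each per-frame filtered image with a temporally denoised version of the
# sequence; high-saliency pixels keep more of the detailed per-frame result,
# low-saliency pixels lean on the denoised (temporally consistent) result.
import numpy as np

def consistent_filtering(filtered_frames, saliency, radius=2):
    """filtered_frames: T x H x W x C per-frame filtered video;
    saliency: T x H x W weights in [0, 1]; radius: temporal window half-size."""
    filtered_frames = np.asarray(filtered_frames, dtype=np.float64)
    T = filtered_frames.shape[0]
    out = np.empty_like(filtered_frames)
    for t in range(T):
        lo, hi = max(0, t - radius), min(T, t + radius + 1)
        denoised = filtered_frames[lo:hi].mean(axis=0)   # stand-in temporal denoiser
        w = saliency[t][..., None]                       # broadcast over channels
        out[t] = w * filtered_frames[t] + (1.0 - w) * denoised
    return out
```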
  • Item
    Selecting Texture Resolution Using a Task-specific Visibility Metric
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Wolski, Krzysztof; Giunchi, Daniele; Kinuwaki, Shinichi; Didyk, Piotr; Myszkowski, Karol; Steed, Anthony; Mantiuk, Rafal K.; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    In real-time rendering, the appearance of scenes is greatly affected by the quality and resolution of the textures used for image synthesis. At the same time, the size of textures determines the performance and the memory requirements of rendering. As a result, finding the optimal texture resolution is critical, but also a non-trivial task, since the visibility of texture imperfections depends on the underlying geometry, illumination, interactions between several texture maps, and viewing positions. Ideally, we would like to automate the task with a visibility metric that can predict the optimal texture resolution. To maximize the performance of such a metric, it should be trained on the given task. This, however, requires sufficient user data, which is often difficult to obtain. To address this problem, we develop a procedure for training an image visibility metric for a specific task while reducing the effort required to collect new data. The procedure involves generating a large dataset using an existing visibility metric, followed by refining that dataset with the help of an efficient perceptual experiment. The refined dataset is then used to retune the metric. This way, we augment sparse perceptual data to a large number of per-pixel annotated visibility maps, which serve as the training data for application-specific visibility metrics. While our approach is general and can potentially be applied to different image distortions, we demonstrate an application in a game engine where we optimize the resolution of various textures, such as albedo and normal maps.
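    Once such a metric is available, the selection step it enables is easy to illustrate. The sketch below is an assumption-laden example, not the paper's procedure: visibility_metric, the texture's resize method, and the tolerance threshold are all hypothetical placeholders.

```python
# Pick the smallest texture resolution whose rendering the metric predicts to be
# visually indistinguishable from the full-resolution reference.
import numpy as np

def select_texture_resolution(render, reference_texture, resolutions,
                              visibility_metric, tolerance=0.05):
    """render(texture) -> H x W x 3 image; reference_texture is assumed to offer a
    resize(res) method; visibility_metric(img_a, img_b) -> H x W map of per-pixel
    detection probabilities in [0, 1]."""
    reference_image = render(reference_texture)
    for res in sorted(resolutions):                        # try the smallest first
        candidate = render(reference_texture.resize(res))  # hypothetical resize API
        visibility = visibility_metric(candidate, reference_image)
        if np.percentile(visibility, 95) < tolerance:      # differences barely visible
            return res
    return max(resolutions)
```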
  • Item
    Perception-driven Accelerated Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Weier, Martin; Stengel, Michael; Roth, Thorsten; Didyk, Piotr; Eisemann, Elmar; Eisemann, Martin; Grogorick, Steve; Hinkenjann, André; Kruijff, Ernst; Magnor, Marcus; Myszkowski, Karol; Slusallek, Philipp; Victor Ostromoukov and Matthias Zwicker
    Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism, including global illumination, accurate depth of field and motion blur, and spectral effects, are often not accounted for, especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or light-field displays). These developments pose significant unsolved technical challenges, owing to limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which means that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.