Search Results
Now showing 1 - 10 of 16
Item: High Dynamic Range Techniques in Graphics: from Acquisition to Display (The Eurographics Association, 2005)
Authors: Goesele, Michael; Heidrich, Wolfgang; Höfflinger, Bernd; Krawczyk, Grzegorz; Myszkowski, Karol; Trentacoste, Matthew
Editors: Ming Lin and Celine Loscos
Abstract: This course is motivated by the recent, rapid progress in the development and accessibility of high dynamic range (HDR) technology, which creates many interesting opportunities and challenges in graphics. The course presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. Successful examples of the use of HDR technology in research setups and industrial applications are also provided. Where needed, relevant background information on human perception is given, enabling a better understanding of the design choices behind the discussed algorithms and HDR equipment.

Item: Perception-driven Accelerated Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2017)
Authors: Weier, Martin; Stengel, Michael; Roth, Thorsten; Didyk, Piotr; Eisemann, Elmar; Eisemann, Martin; Grogorick, Steve; Hinkenjann, André; Kruijff, Ernst; Magnor, Marcus; Myszkowski, Karol; Slusallek, Philipp
Editors: Victor Ostromoukov and Matthias Zwicker
Abstract: Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, and spectral effects, especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or light-field displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research directions targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.

Item: High Dynamic Range Imaging and Low Dynamic Range Expansion for Generating HDR Content (The Eurographics Association, 2009)
Authors: Banterle, Francesco; Debattista, Kurt; Artusi, Alessandro; Pattanaik, Sumanta; Myszkowski, Karol; Ledda, Patrick; Bloj, Marina; Chalmers, Alan
Editors: M. Pauly and G. Greiner
Abstract: In the last few years, researchers in the field of High Dynamic Range (HDR) Imaging have focused on providing tools for expanding Low Dynamic Range (LDR) content to generate HDR images, due to the growing popularity of HDR in applications such as photography and rendering via image-based lighting, and the imminent arrival of HDR displays on the consumer market. LDR content expansion is required due to the lack of fast and reliable consumer-level HDR capture for still images and videos. Furthermore, LDR content expansion will allow legacy LDR stills, videos, and LDR applications created over the last century and more to be re-used and made widely available.
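The core idea behind LDR expansion described above can be illustrated with a deliberately simple sketch (Python/NumPy). The gamma, peak-luminance, and highlight-boost values here are invented for illustration and do not correspond to any specific operator surveyed in the report:

```python
import numpy as np

def expand_ldr(ldr8, gamma=2.2, peak_nits=1000.0, boost=2.0):
    """Toy LDR-to-HDR expansion. All parameter names and values are
    illustrative assumptions, not taken from any published operator."""
    x = np.clip(ldr8, 0.0, 255.0) / 255.0  # normalize 8-bit code values
    linear = x ** gamma                    # undo the display gamma
    # Boost highlights more strongly than shadows, the basic idea behind
    # inverse-tone-mapping style expansion, then scale to the display peak.
    return peak_nits * linear ** boost

codes = np.array([0.0, 128.0, 255.0])
hdr = expand_ldr(codes)  # black stays at 0, white maps to peak_nits
```

Real operators discussed in the report are content-adaptive (e.g., they detect and selectively expand highlight regions) rather than applying one global curve as this sketch does.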
The use of certain LDR expansion methods, those based on the inversion of tone mapping operators, has made it possible to create novel compression algorithms that tackle the problem of HDR content storage size, which remains one of the major obstacles to the adoption of HDR. These methods are used in conjunction with traditional LDR compression methods and can evolve accordingly. The goal of this report is to provide a comprehensive overview of HDR Imaging and an in-depth review of these emerging topics.

Item: Mapping Images to Target Devices: Spatial, Temporal, Stereo, Tone, and Color (The Eurographics Association, 2012)
Authors: Banterle, Francesco; Artusi, Alessandro; Aydin, Tunc O.; Didyk, Piotr; Eisemann, Elmar; Gutierrez, Diego; Mantiuk, Rafael; Myszkowski, Karol; Ritschel, Tobias
Editors: Renato Pajarola and Michela Spagnuolo
Abstract: Retargeting is a process through which an image or a video is adapted from the display device for which it was meant (target display) to another one (retarget display). The retarget display can have different features from the target one, such as dynamic range, discretization levels, color gamut, multi-view (3D) capability, refresh rate, and spatial resolution. This tutorial presents the latest solutions and techniques for retargeting images along various dimensions (such as dynamic range, colors, and temporal and spatial resolutions) and offers, for the first time, a much-needed holistic view of the field. This includes how to measure and analyze the changes applied to an image/video in terms of quality, using both (subjective) psychophysical experiments and (objective) computational metrics.

Item: Manipulating Refractive and Reflective Binocular Disparity (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Authors: Dabala, Lukasz; Kellnhofer, Petr; Ritschel, Tobias; Didyk, Piotr; Templin, Krzysztof; Myszkowski, Karol; Rokita, P.; Seidel, Hans-Peter
Editors: B. Levy and J. Kautz
Abstract: Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account only for the binocular disparity of surfaces that are diffuse and opaque. However, combinations of transparent and specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e.g., for glass, which both reflects and refracts, and this may confuse the observer and result in a poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes, where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, ensuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates that our approach combines comfortable viewing with realistic depiction of typical specular scenes.

Item: NoRM: No-Reference Image Quality Metric for Realistic Image Synthesis (The Eurographics Association and John Wiley and Sons Ltd., 2012)
Authors: Herzog, Robert; Čadík, Martin; Aydin, Tunç O.; Kim, Kwang In; Myszkowski, Karol; Seidel, Hans-Peter
Editors: P. Cignoni and T. Ertl
Abstract: Synthetically generating images and video frames of complex 3D scenes using photo-realistic rendering software is often prone to artifacts and requires expert knowledge to tune the parameters. The manual work required for detecting and preventing artifacts can be automated through objective quality evaluation of synthetic images.
Most practical objective quality assessment methods for natural images rely on a ground-truth reference, which is often not available in rendering applications. While general-purpose no-reference image quality assessment is a difficult problem, we show in a subjective study that the performance of a dedicated no-reference metric, as presented in this paper, can match state-of-the-art metrics that do require a reference. This level of predictive power is achieved by exploiting information about the underlying synthetic scene (e.g., 3D surfaces, textures) instead of merely considering color, and by training our learning framework with typical rendering artifacts. We show that our method successfully detects various non-trivial types of artifacts, such as noise and clamping bias due to insufficient virtual point light sources, and shadow-map discretization artifacts. We also briefly discuss an inpainting method for automatic correction of detected artifacts.

Item: Perceptually-motivated Real-time Temporal Upsampling of 3D Content for High-refresh-rate Displays (The Eurographics Association and Blackwell Publishing Ltd, 2010)
Authors: Didyk, Piotr; Eisemann, Elmar; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter
Abstract: High-refresh-rate displays (e.g., 120 Hz) have recently become available on the consumer market and are quickly gaining popularity. One of their aims is to reduce the perceived blur created by moving objects that are tracked by the human eye. However, an improvement is only achieved if the video stream is produced at the same high refresh rate (i.e., 120 Hz). Some devices, such as LCD TVs, solve this problem by converting low-refresh-rate content (e.g., 50 Hz PAL) into a higher temporal resolution (e.g., 200 Hz) based on two-dimensional optical flow. In our approach, we show how rendered three-dimensional images produced by recent graphics hardware can be up-sampled more efficiently, resulting in higher quality at the same time. Our algorithm relies on several perceptual findings and preserves the naturalness of the original sequence. A psychophysical study validates our approach and shows that temporally up-sampled video streams are preferred over the standard low-rate input by the majority of users. We show that our solution improves task performance on high-refresh-rate displays.

Item: Scalable Remote Rendering with Depth and Motion-flow Augmented Streaming (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Pajak, Dawid; Herzog, Robert; Eisemann, Elmar; Myszkowski, Karol; Seidel, Hans-Peter
Editors: M. Chen and O. Deussen
Abstract: In this paper, we focus on the efficient compression and streaming of frames rendered from a dynamic 3D model. Remote rendering and on-the-fly streaming are becoming increasingly attractive for interactive applications: data is kept confidential and only images are sent to the client, so even if the client's hardware resources are modest, the user can interact with state-of-the-art rendering applications executed on the server. Our solution focuses on video information augmented, e.g., by depth, which is key to increasing robustness with respect to data loss and image reconstruction, and is an important feature for stereo vision and other client-side applications. Two major challenges arise in such a setup: first, the server workload has to be controlled to support many clients; second, the data transfer needs to be efficient. Consequently, our contributions are twofold. First, we reduce the server-based computations by making use of sparse sampling and temporal consistency to avoid expensive pixel evaluations. Second, our data-transfer solution takes limited bandwidths into account, is robust to information loss, and its compression and decompression are efficient enough to support real-time interaction. Our key insight is to tailor our method explicitly for rendered 3D content and to shift some computations to client GPUs, to better balance the server/client workload.
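Both the temporal-upsampling and the streaming entries above rely on warping rendered frames along known per-pixel motion vectors. A minimal forward-warp sketch of that idea (Python/NumPy; grayscale, nearest-neighbour scatter, no occlusion or hole handling — all names are illustrative, and this is not the algorithm of either paper):

```python
import numpy as np

def warp_frame(frame, motion, t=0.5):
    """Forward-warp a rendered frame along per-pixel motion vectors.

    frame:  (H, W) grayscale image rendered at time 0.
    motion: (H, W, 2) per-pixel motion in pixels (dy, dx) toward the next frame.
    t:      fraction of the motion to apply (0.5 = halfway in-between frame).
    """
    h, w = frame.shape
    out = frame.copy()  # keep source values where no warped pixel lands (holes)
    ys, xs = np.mgrid[0:h, 0:w]
    # Round each warped position to the nearest pixel, clamped to the image.
    ty = np.clip(np.round(ys + t * motion[..., 0]).astype(int), 0, h - 1)
    tx = np.clip(np.round(xs + t * motion[..., 1]).astype(int), 0, w - 1)
    out[ty, tx] = frame[ys, xs]  # scatter each pixel to its warped position
    return out

frame = np.zeros((4, 4))
frame[2, 1] = 1.0
motion = np.zeros((4, 4, 2))
motion[..., 1] = 2.0            # uniform motion: 2 px to the right
mid = warp_frame(frame, motion, t=0.5)  # bright pixel moves 1 px right
```

Production systems replace the nearest-neighbour scatter with filtered resampling and must resolve occlusions and fill disocclusion holes, which is where much of the perceptual machinery in these papers comes in.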
Our framework is progressive, scalable, and allows us to stream augmented high-resolution (e.g., HD-ready) frames with small bandwidth on standard hardware.

Item: Render2MPEG: A Perception-based Framework Towards Integrating Rendering and Video Compression (The Eurographics Association and Blackwell Publishing Ltd, 2008)
Authors: Herzog, Robert; Kinuwaki, Shinichi; Myszkowski, Karol; Seidel, Hans-Peter
Abstract: Currently, 3D animation rendering and video compression are completely independent processes, even if rendered frames are streamed on-the-fly within a client-server platform. In such a scenario, which may involve time-varying transmission bandwidths and different display characteristics at the client side, dynamically adjusting the rendering quality to these requirements can lead to better use of server resources. In this work, we present a framework in which the renderer and an MPEG codec are coupled through a straightforward interface that provides precise motion vectors from the rendering side to the codec, and perceptual error thresholds for each pixel in the opposite direction. The perceptual error thresholds take into account bandwidth-dependent quantization errors resulting from the lossy compression, as well as image-content-dependent luminance and spatial contrast masking. The availability of the discrete cosine transform (DCT) coefficients at the codec side makes it possible to use advanced models of the human visual system (HVS) in the perceptual error threshold derivation without incurring any significant cost. These error thresholds are then used to control the rendering quality and keep it well aligned with the compressed-stream quality. In our prototype system we use the lightcuts technique developed by Walter et al., which we enhance to handle dynamic image sequences, and an MPEG-2 implementation. Our results clearly demonstrate the advantages of coupling rendering with video compression in terms of faster rendering. Furthermore, temporally coherent rendering leads to a reduction of temporal artifacts.

Item: Global Illumination for Interactive Applications and High-Quality Animations (Eurographics Association, 2002)
Authors: Damez, Cyrille; Dmitriev, Kirill; Myszkowski, Karol
Abstract: One of the main obstacles to the use of global illumination in the image synthesis industry is the considerable amount of time needed to compute the lighting for a single image. Until now, this computational cost has prevented its widespread use in interactive design applications as well as in computer animations. Several algorithms have been proposed to address these issues. In this report, we present a much-needed survey and classification of the most up-to-date of these methods. Roughly, two families of algorithms can be distinguished. The first aims at providing interactive feedback for lighting design applications. The second gives higher priority to the quality of results, and therefore relies on offline computations. Recently, impressive advances have been made in both categories. Indeed, with the steady progress of computing resources and graphics hardware, and the current trend of new algorithms for animated scenes, common use of global illumination seems closer than ever.