Search Results

Now showing 1 - 10 of 10
  • Item
    2009 Eurographics Symposium on Parallel Graphics and Visualization
    (The Eurographics Association and Blackwell Publishing Ltd, 2009) Comba, Joao; Weiskopf, Daniel; Debattista, Kurt
  • Item
    High Dynamic Range Imaging and Low Dynamic Range Expansion for Generating HDR Content
    (The Eurographics Association, 2009) Banterle, Francesco; Debattista, Kurt; Artusi, Alessandro; Pattanaik, Sumanta; Myszkowski, Karol; Ledda, Patrick; Bloj, Marina; Chalmers, Alan; M. Pauly and G. Greiner
    In the last few years, researchers in the field of High Dynamic Range (HDR) Imaging have focused on providing tools for expanding Low Dynamic Range (LDR) content for the generation of HDR images due to the growing popularity of HDR in applications, such as photography and rendering via Image-Based Lighting, and the imminent arrival of HDR displays to the consumer market. LDR content expansion is required due to the lack of fast and reliable consumer level HDR capture for still images and videos. Furthermore, LDR content expansion will allow the re-use of legacy LDR stills, videos and LDR applications created over the last century and more. The use of certain LDR expansion methods, those that are based on the inversion of tone mapping operators, has made it possible to create novel compression algorithms that tackle the problem of the size of HDR content storage, which remains one of the major obstacles to be overcome for the adoption of HDR. These methods are used in conjunction with traditional LDR compression methods and can evolve accordingly. The goal of this report is to provide a comprehensive overview of HDR Imaging and an in-depth review of these emerging topics.
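    The inversion of a tone mapping operator mentioned in this abstract can be illustrated with a toy sketch: Reinhard's simple global operator maps scene luminance L to L/(1+L), and inverting that curve expands a display-referred value back toward scene-referred luminance. This is a hedged illustration only; the `l_max` rescaling parameter is an assumption for the example, not a quantity defined in the report.

```python
def tone_map(l_world):
    """Reinhard's simple global operator: compresses [0, inf) into [0, 1)."""
    return l_world / (1.0 + l_world)

def inverse_tone_map(l_display, l_max=1.0, eps=1e-6):
    """Expand a display-referred value back to scene-referred luminance.

    `l_max` rescales the result to a target peak luminance; it is an
    illustrative parameter, not one taken from the report.
    """
    l = min(l_display, 1.0 - eps)      # guard the pole at pure white
    return l_max * l / (1.0 - l)
```

    Round-tripping a value, e.g. `tone_map(inverse_tone_map(0.25))`, recovers 0.25, which is the property inverse-TMO-based compression schemes rely on.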
  • Item
    Cost Prediction Maps for Global Illumination
    (The Eurographics Association, 2005) Gillibrand, Richard; Debattista, Kurt; Chalmers, Alan; Louise M. Lever and Mary McDerby
    There is a growing demand from the media industry, including computer games, virtual reality and simulation, for increasing realism in real-time for their computer generated images. Despite considerable advances in processing power and graphics hardware, increasing scene complexity means that it is still not possible to achieve high fidelity computer graphics in a reasonable, let alone real, time on a single computer. Cost prediction is a technique which acquires knowledge of computational complexity within the rendering pipeline as the computation progresses and then uses this to best allocate the available resources to achieve the highest perceptual quality of an image in a time constrained system. In this paper we describe a method of acquiring computational cost complexity knowledge within a high fidelity graphics environment. This cost map may be used in combination with other perceptually derived maps to control a selective renderer in order to achieve the best perceptual quality results for a user specified frame-rate.
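    As a rough illustration of how a cost map might drive a time-constrained renderer, the sketch below distributes a fixed ray budget across image regions in proportion to their predicted cost. The proportional policy and the function itself are assumptions for illustration, not the paper's actual selective renderer.

```python
def allocate_ray_budget(cost_map, total_rays):
    """Split a fixed ray budget across image regions in proportion to
    their predicted cost, so expensive regions receive more samples
    within the overall time constraint.
    """
    total_cost = sum(cost_map)
    if total_cost == 0:
        # No prediction available: fall back to a uniform split.
        return [total_rays // len(cost_map)] * len(cost_map)
    return [int(total_rays * c / total_cost) for c in cost_map]
```

    For example, with predicted costs `[1.0, 3.0]` and a budget of 400 rays, the second region receives three times the samples of the first.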
  • Item
    A Psychophysical Evaluation of Inverse Tone Mapping Techniques
    (The Eurographics Association and Blackwell Publishing Ltd, 2009) Banterle, Francesco; Ledda, Patrick; Debattista, Kurt; Bloj, Marina; Artusi, Alessandro; Chalmers, Alan
    In recent years inverse tone mapping techniques have been proposed for enhancing low-dynamic range (LDR) content for a high-dynamic range (HDR) experience on HDR displays, and for image based lighting. In this paper, we present a psychophysical study to evaluate the performance of inverse (reverse) tone mapping algorithms. Some of these techniques are computationally expensive because they need to resolve quantization problems that can occur when expanding an LDR image. Even if they can be implemented efficiently on hardware, the computational cost can still be high. An alternative is to utilize less complex operators, although these may suffer in terms of accuracy. Our study investigates, firstly, if a high level of complexity is needed for inverse tone mapping and, secondly, if a correlation exists between image content and quality. Two main applications have been considered: visualization on an HDR monitor and image-based lighting.
  • Item
    The Virtual Reconstruction and Daylight Illumination of the Panagia Angeloktisti
    (The Eurographics Association, 2009) Happa, Jassim; Artusi, Alessandro; Dubla, Piotr; Bashford-Rogers, Tom; Debattista, Kurt; Hulusic, Vedad; Chalmers, Alan; Kurt Debattista and Cinzia Perlingieri and Denis Pitzalis and Sandro Spina
    High-fidelity virtual reconstructions can be used as accurate 3D representations of historical environments. After modelling the site to high precision, physically-based and historically correct light models must be implemented to complete an authentic visualisation. Sunlight has a major visual impact on a site, from directly lit areas to sections in deep shadow. The scene illumination also changes substantially at different times of the day. In this paper we present a virtual reconstruction of the Panagia Angeloktisti, a Byzantine church on Cyprus. We investigate lighting simulations of the church at different times of the day, making use of Image-Based Lighting, using High Dynamic Range environment maps created from photographs and interpolated spectrophotometer data collected on site. Furthermore, the paper also explores the benefits and disadvantages of employing unbiased rendering methods such as Path Tracing and Metropolis Light Transport for cultural heritage applications.
  • Item
    High Dynamic Range Imaging and Low Dynamic Range Expansion for Generating HDR Content
    (The Eurographics Association and Blackwell Publishing Ltd, 2009) Banterle, Francesco; Debattista, Kurt; Artusi, Alessandro; Pattanaik, Sumanta; Myszkowski, Karol; Ledda, Patrick; Chalmers, Alan
    In the last few years, researchers in the field of High Dynamic Range (HDR) Imaging have focused on providing tools for expanding Low Dynamic Range (LDR) content for the generation of HDR images due to the growing popularity of HDR in applications, such as photography and rendering via Image-Based Lighting, and the imminent arrival of HDR displays to the consumer market. LDR content expansion is required due to the lack of fast and reliable consumer level HDR capture for still images and videos. Furthermore, LDR content expansion will allow the re-use of legacy LDR stills, videos and LDR applications created over the last century and more. The use of certain LDR expansion methods, those that are based on the inversion of Tone Mapping Operators (TMOs), has made it possible to create novel compression algorithms that tackle the problem of the size of HDR content storage, which remains one of the major obstacles to be overcome for the adoption of HDR. These methods are used in conjunction with traditional LDR compression methods and can evolve accordingly. The goal of this report is to provide a comprehensive overview of HDR Imaging and an in-depth review of these emerging topics.
  • Item
    Wait-Free Shared-Memory Irradiance Cache
    (The Eurographics Association, 2009) Dubla, Piotr; Debattista, Kurt; Santos, Luis Paulo; Chalmers, Alan; Kurt Debattista and Daniel Weiskopf and Joao Comba
    The irradiance cache (IC) is an acceleration data structure which caches indirect diffuse irradiance values within the context of a ray tracing algorithm. In multi-threaded shared memory parallel systems the IC must be shared among rendering threads in order to achieve high efficiency levels. Since all threads read and write from it an access control mechanism is required, which ensures that the data structure is not corrupted. Besides assuring correct accesses to the IC this access mechanism must incur minimal overheads such that performance is not compromised. In this paper we propose a new wait-free access mechanism to the shared irradiance cache. Wait-free data structures, unlike traditional access control mechanisms, do not make use of any blocking or busy waiting, avoiding most serialisation and reducing contention. We compare this technique with two other classical approaches: a lock based mechanism and a local write technique, where each thread maintains its own cache of locally evaluated irradiance values. We demonstrate that the wait-free approach significantly reduces synchronisation overheads compared to the two other approaches and that it increases data sharing over the local copy technique. This is, to the best of our knowledge, the first work explicitly addressing access to a shared IC; this problem is becoming more and more relevant with the advent of multicore systems and the ever increasing number of processors within these systems.
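    The core idea of non-blocking access to a shared cache can be sketched in miniature. The toy below relies on CPython's GIL making `list.append` atomic, so writers never corrupt the structure and readers never block; a reader that races with a writer simply misses the newest entry, a harmless cache miss rather than corruption. This is an illustrative assumption-laden sketch, not the paper's octree-based wait-free mechanism.

```python
import threading

class SharedCache:
    """Toy append-only irradiance cache (a sketch, not the paper's design).

    No locks anywhere: inserts are single atomic appends under CPython,
    and lookups snapshot the list length once, so concurrent writers can
    at worst cause a miss on the very newest samples.
    """
    def __init__(self):
        self._records = []                       # (position, irradiance)

    def insert(self, position, irradiance):
        self._records.append((position, irradiance))   # lock-free append

    def lookup(self, position, radius):
        n = len(self._records)                   # snapshot the length once
        return [(p, e) for p, e in self._records[:n]
                if abs(p - position) <= radius]

# Several "rendering threads" share one cache without any locking.
cache = SharedCache()
threads = [threading.Thread(
               target=lambda base: [cache.insert(base + i, 1.0)
                                    for i in range(100)],
               args=(t * 1000,))
           for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    A lock-based variant would wrap both `insert` and `lookup` in a mutex, serialising every access; the append-only design avoids that serialisation entirely, which is the contrast the paper measures.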
  • Item
    Accelerating the Irradiance Cache through Parallel Component-Based Rendering
    (The Eurographics Association, 2006) Debattista, Kurt; Santos, Luís Paulo; Chalmers, Alan; Alan Heirich and Bruno Raffin and Luis Paulo dos Santos
    The irradiance cache is an acceleration data structure which caches indirect diffuse samples within the framework of a distributed ray-tracing algorithm. Previously calculated values can be stored and reused in future calculations, resulting in an order of magnitude improvement in computational performance. However, the irradiance cache is a shared data structure and so it is notoriously difficult to parallelise over a distributed parallel system. The hurdle to overcome is when and how to share cached samples. This sharing incurs communication overheads and yet must happen frequently to minimise cache misses and thus maximise the performance of the cache. We present a novel component-based parallel algorithm implemented on a cluster of computers, whereby the indirect diffuse calculations are calculated on a subset of nodes in the cluster. This method exploits the inherent spatial coherent nature of the irradiance cache; by reducing the set of nodes amongst which cached values must be shared, the sharing frequency can be kept high, thus decreasing both communication overheads and cache misses. We demonstrate how our new parallel rendering algorithm significantly outperforms traditional methods of distributing the irradiance cache.
  • Item
    Time-constrained High-fidelity Rendering on Local Desktop Grids
    (The Eurographics Association, 2009) Aggarwal, Vibhor; Debattista, Kurt; Dubla, Piotr; Bashford-Rogers, Thomas; Chalmers, Alan; Kurt Debattista and Daniel Weiskopf and Joao Comba
    Parallel computing has been frequently used for reducing the rendering time of high-fidelity images, since the generation of such images has a high computational cost. Numerous algorithms have been proposed for parallel rendering but they primarily focus on utilising shared memory machines or dedicated distributed clusters. A local desktop grid, composed of arbitrary computational resources connected to a network such as those in a lab or an enterprise, provides an inexpensive alternative to dedicated clusters. The computational power offered by such a desktop grid is time-variant as the resources are not dedicated. This paper presents fault-tolerant algorithms for rendering high-fidelity images on a desktop grid within a given time-constraint. Due to the dynamic nature of resources, the task assignment does not rely on subdividing the image into tiles. Instead, a progressive approach is used that encompasses aspects of the entire image for each task and ensures that the time-constraints are met. Traditional reconstruction techniques are used to calculate the missing data. This approach is designed to avoid redundant computation in order to maintain the time-constraints. As a further enhancement, the algorithm decomposes the computation into components representing different tasks to achieve better visual quality given the time-constraint and variable resources. This paper illustrates how the component-based approach maintains better visual fidelity for a given time-constraint while making use of volatile computational resources.
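    The non-tiled, progressive task assignment described above can be sketched as interleaved pixel subsets: each task touches pixels spread over the whole frame, so any subset of completed tasks yields a coarse version of the entire image rather than finished tiles next to empty ones. The decomposition below is an assumed illustration of that property, not the paper's actual scheduler.

```python
def make_tasks(width, height, n_tasks):
    """Progressive task decomposition: rather than tiling the image,
    each task covers an interleaved subset of all pixels, so losing a
    task to a volatile node leaves holes spread evenly across the frame
    (which traditional reconstruction can then fill in).
    """
    pixels = [(x, y) for y in range(height) for x in range(width)]
    return [pixels[k::n_tasks] for k in range(n_tasks)]
```

    With this layout, a vanished desktop-grid node costs a sparse scattering of samples instead of a missing tile, which is what makes image-wide reconstruction of the missing data feasible.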
  • Item
    High-Fidelity Rendering of Animations on the Grid: A Case Study
    (The Eurographics Association, 2008) Aggarwal, Vibhor; Chalmers, Alan; Debattista, Kurt; Jean M. Favre and Kwan-Liu Ma
    Generation of physically-based rendered animations is a computationally expensive process, often taking many hours to complete. Parallel rendering, on shared memory machines and small to medium clusters, is often employed to improve overall rendering times. Massive parallelism is possible using Grid computing. However, since the Grid is a multi-user environment with a large number of nodes potentially separated by substantial network distances, communication should be kept to a minimum. While for some rendering algorithms running animations on the Grid may be a simple task of assigning an individual frame to each processor, certain acceleration data structures, such as the irradiance cache, require different approaches. The irradiance cache, which caches the indirect diffuse samples for interpolation of indirect lighting calculations, may be used to significantly reduce the computational requirements when generating high-fidelity animations. Parallel solutions for irradiance caching using shared memory or message passing are not ideal for Grid computing due to the communication overhead and must be adapted for this highly parallel environment. This paper presents a case study on rendering high-fidelity animations using a two-pass approach by adapting the irradiance cache algorithm for parallel rendering using Grid computing. This approach exploits the temporal coherence between animation frames to significantly gain speed-up and enhance visual quality. The key feature of our approach is that it does not use any additional data structure and can thus be used with any irradiance cache or similar acceleration mechanism for rendering on the Grid.