Search Results
Now showing 1 - 7 of 7
Item: Light Clustering for Dynamic Image Based Lighting (The Eurographics Association, 2012)
Authors: Staton, Sam; Debattista, Kurt; Bashford-Rogers, Thomas; Chalmers, Alan. Editors: Hamish Carr and Silvester Czanner.
High Dynamic Range (HDR) imagery has made it possible to relight virtual objects accurately with the captured lighting. This technique, called Image Based Lighting (IBL), is commonly used to render scenes using real-world illumination. IBL has mostly been limited to static scenes due to the limitations of HDR capture. However, there has recently been progress on developing devices which can capture HDR video sequences, and these can also be used to light virtual environments dynamically. If existing IBL algorithms are applied to this dynamic problem, temporal artifacts seen as flickering can often arise because samples are selected from different parts of the environment in consecutive frames. In this paper we present a method for efficiently rendering virtual scenarios with such captured sequences, based on spatial and temporal clustering. Our proposed Dynamic IBL (DIBL) method improves temporal quality by suppressing flickering, and we demonstrate its application to fast previews of scenes lit by video environment maps.

Item: Backwards Compatible JPEG Stereoscopic High Dynamic Range Imaging (The Eurographics Association, 2012)
Authors: Selmanovic, Elmedin; Debattista, Kurt; Bashford-Rogers, Thomas; Chalmers, Alan. Editors: Hamish Carr and Silvester Czanner.
In this paper we introduce Stereoscopic High Dynamic Range (SHDR) imagery, a novel technique that combines high dynamic range imaging and stereoscopy. Stereoscopic imaging captures two images representing the views of both eyes and allows for better depth perception. High dynamic range (HDR) imaging is an emerging technology which allows the capture, storage and display of real-world lighting, as opposed to traditional imagery, which only captures a restricted range of light due to limitations in capture hardware and displays.
HDR provides better contrast and more natural-looking scenes. One of the main challenges that must be overcome for SHDR to succeed is an efficient storage format, since SHDR content is very large if left uncompressed: stereoscopic imaging requires the storage of two images, and uncompressed HDR requires a floating point value per colour channel per pixel. In this paper we present a number of SHDR compression methods that are backward compatible with traditional JPEG, stereo JPEG and JPEG-HDR. The proposed methods can encode SHDR content at little more than the size of a traditional LDR image, and the backward compatibility encourages early adoption of the format, since content remains viewable in any of the legacy viewers.

Item: Fast Scalable k-NN Computation for Very Large Point Clouds (The Eurographics Association, 2012)
Authors: Spina, Sandro; Debattista, Kurt; Bugeja, Keith; Chalmers, Alan. Editors: Hamish Carr and Silvester Czanner.
The process of reconstructing virtual representations of large real-world sites is traditionally carried out using laser scanning technology. Recent advances in these technologies have led to improvements in precision and accuracy and to higher sampling rates. State-of-the-art laser scanners are capable of acquiring around a million points per second, generating enormous point cloud data sets. These data sets are usually cleaned through the application of numerous post-processing algorithms, such as normal determination, clustering and noise removal. A common factor in these algorithms is the recurring need to compute point neighbourhoods, usually by computing the k-nearest neighbours of each point. The majority of these algorithms assume that the data sets they operate on fit in main memory, while others take the size of the data sets into account and are designed to keep data on disk.
We present a hybrid approach which exploits the spatial locality of point clusters in the point cloud and loads them into system memory on demand, taking advantage of paged virtual memory in modern operating systems. In this way, we maximize processor utilization while keeping I/O overheads to a minimum. We evaluate our approach on point clouds ranging from 50K to 333M points on machines with 1GB, 2GB, 4GB and 8GB of system memory.

Item: Acoustic Rendering and Auditory–Visual Cross-Modal Perception and Interaction (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Authors: Hulusic, Vedad; Harvey, Carlo; Debattista, Kurt; Tsingos, Nicolas; Walker, Steve; Howard, David; Chalmers, Alan. Editors: Holly Rushmeier and Oliver Deussen.
In recent years, research in the three-dimensional sound generation field has focussed primarily on new applications of spatialized sound. In the computer graphics community, such techniques are most commonly applied to virtual, immersive environments. However, the field is more varied and diverse than this, and other research tackles the problem in a more complete, and computationally expensive, manner. Furthermore, the simulation of light and sound wave propagation is still unachievable at a physically accurate spatio-temporal quality in real time. Although the Human Visual System (HVS) and the Human Auditory System (HAS) are exceptionally sophisticated, they also have certain perceptual and attentional limitations. Researchers, in fields such as psychology, have been investigating these limitations for several years and have produced findings which may be exploited in other fields. This paper provides a comprehensive overview of the major techniques for generating spatialized sound and, in addition, discusses the perceptual and cross-modal influences to consider.
We also describe current limitations and provide an in-depth look at the emerging topics in the field.

Item: Cultural Heritage Predictive Rendering (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Authors: Happa, Jassim; Bashford-Rogers, Tom; Wilkie, Alexander; Artusi, Alessandro; Debattista, Kurt; Chalmers, Alan. Editors: Holly Rushmeier and Oliver Deussen.
High-fidelity rendering can be used to investigate Cultural Heritage (CH) sites in a scientifically rigorous manner. However, a high degree of realism in the reconstruction of a CH site can be misleading insofar as it can be seen to imply a high degree of certainty about the displayed scene, which is frequently not the case, especially when investigating the past. So far, little effort has gone into adapting and formulating a Predictive Rendering pipeline for CH research applications. In this paper, we first discuss the goals and the workflow of CH reconstructions in general, as well as those of traditional Predictive Rendering. Based on this, we then propose a research framework for CH research, which we refer to as 'Cultural Heritage Predictive Rendering' (CHPR). This is an extension to Predictive Rendering that introduces a temporal component and addresses uncertainty that is important for the scene's historical interpretation. To demonstrate these concepts, two example case studies are detailed.

Item: A Significance Cache for Accelerating Global Illumination (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Authors: Bashford-Rogers, Thomas; Debattista, Kurt; Chalmers, Alan. Editors: Holly Rushmeier and Oliver Deussen.
Rendering using physically based methods requires substantial computational resources. Most physically based methods use straightforward techniques that may excessively compute certain types of light transport while ignoring more important ones. Importance sampling is an effective and commonly used technique to reduce variance in such methods. Most current approaches based on Monte Carlo methods sample the BRDF and cosine term, but are unable to sample the indirect illumination, as this is the very term being computed. Knowledge of the incoming illumination can be especially useful for hard-to-find light paths, such as caustics, or for scenes which rely primarily on indirect illumination. To facilitate the determination of such paths, we propose a caching scheme which stores important directions and is sampled analytically to calculate important paths. Results show an improvement over BRDF sampling and similar illumination importance sampling.

Item: Time-constrained Animation Rendering on Desktop Grids (The Eurographics Association, 2012)
Authors: Aggarwal, Vibhor; Debattista, Kurt; Bashford-Rogers, Thomas; Chalmers, Alan. Editors: Hank Childs, Torsten Kuhlen and Fabio Marton.
The computationally intensive nature of high-fidelity rendering has led to a dependence on parallel infrastructures for generating animations. However, such infrastructure is expensive, restricting easy access to high-fidelity animations to organisations which can afford such resources. A desktop grid formed by aggregating idle resources in an institution is an inexpensive alternative, but it is inherently unreliable due to the non-dedicated nature of the architecture. A naive approach to employing desktop grids for rendering animations can lead to inconsistencies in the quality of the rendered animation as the available computational performance fluctuates. Hence, fault-tolerant algorithms are required to utilise a desktop grid efficiently. This paper presents a novel fault-tolerant rendering algorithm for generating high-fidelity animations within a user-defined time constraint. Time-constrained computation provides an elegant way of harnessing desktop grids, as otherwise the makespan cannot be guaranteed. The algorithm uses multi-dimensional quasi-random sampling for load balancing, aimed at achieving the best visual quality across the whole animation even in the presence of faults.
The results show that the presented algorithm is largely insensitive to temporal variations in the computational power of a desktop grid, making it suitable for use in deadline-driven production environments.
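To illustrate why the last abstract's multi-dimensional quasi-random sampling tolerates interruption, the sketch below uses a Halton (van der Corput) sequence to order samples over (frame, x, y). This is an illustrative sketch only, not the paper's algorithm: the function names (`halton`, `sample_schedule`) and the choice of bases are assumptions introduced here. The key property it demonstrates is that any prefix of the schedule, such as one cut short by a node failure, still spreads samples nearly uniformly across all frames.

```python
from collections import Counter

def halton(index, base):
    """Return the index-th element of the van der Corput sequence in the given base."""
    result, f = 0.0, 1.0 / base
    i = index
    while i > 0:
        result += f * (i % base)  # append next radical-inverse digit
        i //= base
        f /= base
    return result

def sample_schedule(num_samples, num_frames, width, height):
    """Yield (frame, x, y) sample locations from a 3D Halton sequence
    with coprime bases 2, 3 and 5, one dimension per axis."""
    for n in range(1, num_samples + 1):
        frame = int(halton(n, 2) * num_frames)
        x = int(halton(n, 3) * width)
        y = int(halton(n, 5) * height)
        yield frame, x, y

# Render only a prefix of the schedule (as if interrupted) and check
# how evenly the completed samples are spread over the frames.
counts = Counter(f for f, _, _ in sample_schedule(4000, 10, 64, 64))
spread = max(counts.values()) - min(counts.values())
```

Because low-discrepancy sequences fill the sampling domain progressively rather than frame by frame, `spread` stays small (a handful of samples around the mean of 400 per frame), which is the property the paper exploits for fault tolerance.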