Search Results

Now showing 1 - 3 of 3
  • Item
    Light Clustering for Dynamic Image Based Lighting
    (The Eurographics Association, 2012) Staton, Sam; Debattista, Kurt; Bashford-Rogers, Thomas; Chalmers, Alan; Hamish Carr and Silvester Czanner
    High Dynamic Range (HDR) imagery has made it possible to relight virtual objects accurately with captured lighting. This technique, called Image Based Lighting (IBL), is commonly used to render scenes with real-world illumination. IBL has mostly been limited to static scenes due to limitations of HDR capture. However, there has recently been progress in developing devices which can capture HDR video sequences, and these can also be used to light virtual environments dynamically. If existing IBL algorithms are applied to this dynamic setting, temporal artifacts perceived as flickering often arise because samples are selected from different parts of the environment in consecutive frames. In this paper we present a method, based on spatial and temporal clustering, for efficiently rendering virtual scenarios with such captured sequences. Our proposed Dynamic IBL (DIBL) method improves temporal quality by suppressing flickering, and we demonstrate its application to fast previews of scenes lit by video environment maps. (An illustrative sketch of the clustering idea appears after this results list.)
  • Item
    Backwards Compatible JPEG Stereoscopic High Dynamic Range Imaging
    (The Eurographics Association, 2012) Selmanovic, Elmedin; Debattista, Kurt; Bashford-Rogers, Thomas; Chalmers, Alan; Hamish Carr and Silvester Czanner
    In this paper we introduce Stereoscopic High Dynamic Range (SHDR) Imagery, a novel technique that combines high dynamic range imaging and stereoscopy. Stereoscopic imaging captures two images representing the views of both eyes and allows for better depth perception. High dynamic range (HDR) imaging is an emerging technology which allows the capture, storage and display of real-world lighting, as opposed to traditional imagery, which captures only a restricted range of light due to limitations in capture hardware and displays. HDR provides better contrast and more natural-looking scenes. One of the main challenges that must be overcome for SHDR to be successful is an efficient storage format, since uncompressed SHDR is very large: stereoscopic imaging requires storing two images, and uncompressed HDR requires a floating-point value per colour channel per pixel. In this paper we present a number of SHDR compression methods that are backward compatible with traditional JPEG, stereo JPEG and JPEG-HDR. The proposed methods can encode SHDR content at little more than the size of a traditional LDR image, and backward compatibility encourages early adoption since the content remains viewable in legacy viewers. (An illustrative sketch of the backward-compatible encoding idea appears after this results list.)
  • Item
    Fast Scalable k-NN Computation for Very Large Point Clouds
    (The Eurographics Association, 2012) Spina, Sandro; Debattista, Kurt; Bugeja, Keith; Chalmers, Alan; Hamish Carr and Silvester Czanner
    The process of reconstructing virtual representations of large real-world sites is traditionally carried out using laser scanning technology. Recent advances in these technologies have led to improved precision and accuracy and to higher sampling rates. State-of-the-art laser scanners are capable of acquiring around a million points per second, generating enormous point cloud data sets. These data sets are usually cleaned through the application of numerous post-processing algorithms, such as normal determination, clustering and noise removal. A common factor in these algorithms is the recurring need to compute point neighbourhoods, usually the k-nearest neighbours of each point. The majority of these algorithms assume that the data sets they operate on fit in main memory, while others take the size of the data sets into account and are designed to keep data on disk. We present a hybrid approach which exploits the spatial locality of point clusters in the point cloud and loads them into system memory on demand by taking advantage of paged virtual memory in modern operating systems. In this way, we maximize processor utilization while keeping I/O overheads to a minimum. We evaluate our approach on point clouds ranging from 50K to 333M points on machines with 1GB, 2GB, 4GB and 8GB of system memory. (An illustrative sketch of the on-demand neighbourhood search idea appears after this results list.)
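
The first result above (Light Clustering for Dynamic Image Based Lighting) only summarises its method as spatial and temporal clustering of the captured lighting. The Python sketch below is not the paper's algorithm; it is a minimal illustration, assuming k-means-style clustering of sampled light directions and intensities plus a nearest-cluster blend between consecutive frames. The names cluster_lights and smooth_over_time and the blend factor alpha are chosen purely for the example.

```python
import numpy as np

def cluster_lights(samples, k, rng):
    # Tiny k-means over light samples; each row is (unit direction xyz, intensity).
    centroids = samples[rng.choice(len(samples), size=k, replace=False)].copy()
    for _ in range(8):  # a few Lloyd iterations are enough for an illustration
        dists = np.linalg.norm(samples[:, None, :3] - centroids[None, :, :3], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            members = samples[labels == c]
            if len(members):
                centroids[c, :3] = members[:, :3].mean(axis=0)
                centroids[c, :3] /= np.linalg.norm(centroids[c, :3])
                centroids[c, 3] = members[:, 3].sum()  # pool intensity into the cluster
    return centroids

def smooth_over_time(prev, curr, alpha=0.3):
    # Match each current cluster to its nearest predecessor and blend, so the
    # effective light set drifts smoothly instead of jumping frame to frame.
    if prev is None:
        return curr
    out = curr.copy()
    for i, c in enumerate(curr):
        j = np.linalg.norm(prev[:, :3] - c[:3], axis=1).argmin()
        out[i] = (1.0 - alpha) * prev[j] + alpha * c
        out[i, :3] /= np.linalg.norm(out[i, :3])  # keep the direction unit length
    return out

rng = np.random.default_rng(0)
prev_clusters = None
for frame in range(3):  # stand-in for consecutive HDR video environment maps
    samples = np.c_[rng.normal(size=(256, 3)), rng.random(256)]
    samples[:, :3] /= np.linalg.norm(samples[:, :3], axis=1, keepdims=True)
    clusters = cluster_lights(samples, k=8, rng=rng)
    prev_clusters = smooth_over_time(prev_clusters, clusters)
    # prev_clusters now holds the temporally smoothed lights used to shade this frame
```

Blending matched clusters rather than re-selecting lights from scratch each frame keeps the effective light set stable between frames, which is the flicker-suppression behaviour the abstract describes.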
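
The second result (Backwards Compatible JPEG Stereoscopic High Dynamic Range Imaging) does not spell out its codecs here. As a rough illustration of the general backward-compatible idea, in the spirit of JPEG-HDR rather than the paper's actual methods, the sketch below tone-maps each eye's HDR image to 8 bits (what a legacy viewer would display) and keeps a per-pixel log-ratio image from which an HDR-aware decoder can recover the original. The helpers tonemap, encode_eye and decode_eye are hypothetical names, and real JPEG entropy coding and metadata embedding are omitted.

```python
import numpy as np

def tonemap(hdr):
    # Simple global operator (x / (1 + x)) to produce an 8-bit display image.
    ldr = hdr / (1.0 + hdr)
    return np.clip(np.round(ldr * 255), 0, 255).astype(np.uint8)

def encode_eye(hdr):
    # Return (ldr, log_ratio): the 8-bit image a legacy viewer sees, plus the
    # per-pixel log ratio an HDR-aware decoder needs to recover `hdr`.
    ldr = tonemap(hdr)
    base = ldr.astype(np.float64) / 255.0
    ratio = hdr / np.maximum(base, 1e-4)  # what tone mapping and quantisation removed
    return ldr, np.log(np.maximum(ratio, 1e-6)).astype(np.float32)

def decode_eye(ldr, log_ratio):
    base = ldr.astype(np.float64) / 255.0
    return base * np.exp(log_ratio)

rng = np.random.default_rng(1)
left_hdr = rng.exponential(scale=2.0, size=(4, 4))   # toy luminance maps, one per eye
right_hdr = rng.exponential(scale=2.0, size=(4, 4))

payload = {eye: encode_eye(img) for eye, img in
           [("left", left_hdr), ("right", right_hdr)]}

# A legacy viewer would simply display payload["left"][0] (plain 8-bit data);
# an HDR-aware stereo viewer reconstructs both eyes from the extra ratio images.
left_back = decode_eye(*payload["left"])
print(np.max(np.abs(left_back - left_hdr)))  # small residual from float32 storage
```

In a real codec the ratio images would themselves be compressed and tucked into JPEG application markers, so the file stays a valid JPEG for legacy viewers; that packaging is outside the scope of this toy example.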
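
The third result (Fast Scalable k-NN Computation for Very Large Point Clouds) combines spatial locality with paged virtual memory. The sketch below is only a toy version of that combination, assuming the cloud is a flat binary file of xyz floats accessed through numpy.memmap and bucketed into a uniform grid so a query touches only nearby rows (and therefore only a few pages). The file name cloud.bin, the cell size and the grid itself are illustrative choices, not the paper's data structure.

```python
import numpy as np

def build_grid(points, cell):
    # Bucket point indices by integer grid cell so a neighbour query only has to
    # look at nearby buckets. For a truly huge cloud this pass would itself be
    # done cluster by cluster; here the toy cloud is small enough to scan once.
    keys = np.floor(points / cell).astype(np.int64)
    grid = {}
    for idx, key in enumerate(map(tuple, keys)):
        grid.setdefault(key, []).append(idx)
    return grid

def knn(points, grid, cell, query, k):
    # Gather candidates from the 3x3x3 block of cells around the query, then sort.
    base = tuple(np.floor(query / cell).astype(np.int64))
    candidates = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                candidates += grid.get((base[0] + dx, base[1] + dy, base[2] + dz), [])
    candidates = np.asarray(candidates, dtype=np.int64)
    dists = np.linalg.norm(points[candidates] - query, axis=1)  # touches only nearby rows
    order = np.argsort(dists)[:k]
    return candidates[order], dists[order]

# Toy on-disk cloud; a real data set would hold hundreds of millions of points and
# the operating system would page rows of the memmap in and out on demand.
pts = np.random.default_rng(2).random((10_000, 3)).astype(np.float32)
pts.tofile("cloud.bin")
cloud = np.memmap("cloud.bin", dtype=np.float32, mode="r").reshape(-1, 3)

cell = 0.05
grid = build_grid(cloud, cell)
idx, dist = knn(cloud, grid, cell, np.array([0.5, 0.5, 0.5], dtype=np.float32), k=8)
```

Because the grid lookup restricts distance computations to a small block of cells, the memmap only faults in the pages holding those candidate rows, which is the on-demand loading behaviour the abstract relies on.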