Search Results
Now showing 1 - 10 of 31
Item: Point Cloud Segmentation for Cultural Heritage Sites (The Eurographics Association, 2011)
Spina, Sandro; Debattista, Kurt; Bugeja, Keith; Chalmers, Alan; Franco Niccolucci and Matteo Dellepiane and Sebastian Pena Serna and Holly Rushmeier and Luc Van Gool
Over the past few years, the acquisition of 3D point information representing the structure of real-world objects has become common practice in many areas. This is particularly true in the Cultural Heritage (CH) domain, where point clouds reproducing important and usually unique artifacts and sites of various sizes and geometric complexities are acquired. Specialized software is then usually used to process and organise this data. This paper addresses the problem of automatically organising this raw data by segmenting point clouds into meaningful subsets. This organisation of the raw data entails a reduction in complexity and facilitates the post-processing effort required to work with the individual objects in the scene. This paper describes an efficient two-stage segmentation algorithm which is able to automatically partition raw point clouds. Following an initial partitioning of the point cloud, a RANSAC-based plane-fitting algorithm is used to add a further layer of abstraction. A number of potential uses of the newly processed point cloud are presented, one of which is object extraction using point cloud queries. Our method is demonstrated on three point clouds ranging from 600K to 1.9M points. One of these point clouds was acquired from the prehistoric temple of Mnajdra, which consists of multiple adjacent complex structures.
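For readers unfamiliar with the second stage mentioned in this abstract, the sketch below shows the core loop of RANSAC plane fitting on a point cloud: repeatedly fit a plane to a random three-point sample and keep the plane with the most inliers. It is a minimal illustration under assumed parameters (iteration count and inlier threshold are placeholders), not the paper's implementation.

```python
import numpy as np

def fit_plane(p0, p1, p2):
    """Plane through three points: unit normal n and offset d with n.x + d = 0."""
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    if norm < 1e-12:          # degenerate (near-collinear) sample
        return None
    n /= norm
    return n, -np.dot(n, p0)

def ransac_plane(points, n_iters=500, threshold=0.01, seed=None):
    """Return (normal, offset, inlier_mask) of the best-supported plane."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        idx = rng.choice(len(points), size=3, replace=False)
        model = fit_plane(*points[idx])
        if model is None:
            continue
        n, d = model
        inliers = np.abs(points @ n + d) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    if best_model is None:
        raise ValueError("all sampled triples were degenerate")
    return best_model[0], best_model[1], best_inliers
```

Running this repeatedly and removing each plane's inliers would yield the kind of plane-based abstraction layer the abstract describes.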
Item: Rendering Interior Cultural Heritage Scenes Using Image-based Shooting (The Eurographics Association, 2011)
Happa, Jassim; Bashford-Rogers, Tom; Debattista, Kurt; Chalmers, Alan; A. Day and R. Mantiuk and E. Reinhard and R. Scopigno
Rendering interior cultural heritage scenes using physically based rendering with outdoor environment maps is computationally expensive using ray tracing methods, and currently difficult for interactive applications without significant precomputation of lighting. In this paper, we present a novel approach to relight synthetic interior scenes by extending image-based lighting to generate fast high-quality interactive previews of these environments. Interior light probes are acquired from a real scene, then used to shoot light onto the virtual scene geometry to accelerate image synthesis by assuming the light sources shot act as the correct solution of light transport for that particular intersection point. We term this approach Image-Based Shooting. It is demonstrated in this paper with an approach inspired by Irradiance Cache Splatting. The methodology is well-suited for interior scenes in which light enters through narrow windows and doors, common at cultural heritage sites. Our implementation generates high-quality interactive preview renditions of these sites and can significantly aid documentation, 3D model validation and predictive rendering. The method can easily be integrated with existing cultural heritage reconstruction pipelines, especially ray-tracing-based renderers.

Item: ExpandNet: A Deep Convolutional Neural Network for High Dynamic Range Expansion from Low Dynamic Range Content (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Marnerides, Demetris; Bashford-Rogers, Thomas; Hatchett, Jon; Debattista, Kurt; Gutierrez, Diego and Sheffer, Alla
High dynamic range (HDR) imaging provides the capability of handling real-world lighting, as opposed to traditional low dynamic range (LDR) imaging, which struggles to accurately represent images with higher dynamic range. However, most imaging content is still available only in LDR. This paper presents a method for generating HDR content from LDR content based on deep Convolutional Neural Networks (CNNs), termed ExpandNet. ExpandNet accepts LDR images as input and generates images with an expanded range in an end-to-end fashion. The model attempts to reconstruct missing information that was lost from the original signal due to quantization, clipping, tone mapping or gamma correction. The added information is reconstructed from learned features, as the network is trained in a supervised fashion using a dataset of HDR images. The approach is fully automatic and data driven; it does not require any heuristics or human expertise. ExpandNet uses a multiscale architecture which avoids the use of upsampling layers to improve image quality. The method performs well compared to expansion/inverse tone mapping operators quantitatively on multiple metrics, even for badly exposed inputs.
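As a rough PyTorch illustration of the multiscale, upsampling-free idea described above: a full-resolution local branch, a dilated mid-scale branch, and a pooled global branch whose feature vector is replicated spatially rather than upsampled. The branch structure loosely follows the abstract's description, but all layer counts, widths and activations here are placeholders, not the published ExpandNet configuration.

```python
import torch
import torch.nn as nn

class MiniExpandNet(nn.Module):
    """Toy three-branch LDR-to-HDR expansion network (illustrative only)."""
    def __init__(self):
        super().__init__()
        # Local branch: small receptive field, preserves per-pixel detail.
        self.local = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.SELU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.SELU())
        # Dilated branch: medium-scale context without losing resolution.
        self.dilated = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=2, dilation=2), nn.SELU(),
            nn.Conv2d(64, 64, 3, padding=2, dilation=2), nn.SELU())
        # Global branch: image-level statistics pooled to one feature vector.
        self.glob = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.SELU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.SELU(),
            nn.AdaptiveAvgPool2d(1))
        # Fusion of the concatenated branches into an expanded-range image.
        self.fuse = nn.Sequential(
            nn.Conv2d(192, 64, 1), nn.SELU(),
            nn.Conv2d(64, 3, 1), nn.Sigmoid())

    def forward(self, ldr):
        h, w = ldr.shape[-2:]
        # Replicate the global feature spatially instead of upsampling it.
        g = self.glob(ldr).expand(-1, -1, h, w)
        feats = torch.cat([self.local(ldr), self.dilated(ldr), g], dim=1)
        return self.fuse(feats)
```

The output would still need rescaling from [0, 1] to physical luminance; the end-to-end training against an HDR dataset is what gives the branches their meaning.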
Item: 2009 Eurographics Symposium on Parallel Graphics and Visualization (The Eurographics Association and Blackwell Publishing Ltd, 2009)
Comba, Joao; Weiskopf, Daniel; Debattista, Kurt

Item: High Dynamic Range Imaging and Low Dynamic Range Expansion for Generating HDR Content (The Eurographics Association, 2009)
Banterle, Francesco; Debattista, Kurt; Artusi, Alessandro; Pattanaik, Sumanta; Myszkowski, Karol; Ledda, Patrick; Bloj, Marina; Chalmers, Alan; M. Pauly and G. Greiner
In the last few years, researchers in the field of High Dynamic Range (HDR) Imaging have focused on providing tools for expanding Low Dynamic Range (LDR) content for the generation of HDR images, due to the growing popularity of HDR in applications such as photography and rendering via Image-Based Lighting, and the imminent arrival of HDR displays to the consumer market. LDR content expansion is required due to the lack of fast and reliable consumer-level HDR capture for still images and videos. Furthermore, LDR content expansion will allow legacy LDR stills, videos and LDR applications created over the last century and more to be re-used and to remain widely available. The use of certain LDR expansion methods, those based on the inversion of tone mapping operators, has made it possible to create novel compression algorithms that tackle the problem of the size of HDR content storage, which remains one of the major obstacles to be overcome for the adoption of HDR. These methods are used in conjunction with traditional LDR compression methods and can evolve accordingly. The goal of this report is to provide a comprehensive overview of HDR Imaging and an in-depth review of these emerging topics.
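To make the "inversion of tone mapping operators" idea concrete, one well-known example is the closed-form inverse of Reinhard's global tone curve, L_d = L(1 + L/L_white^2)/(1 + L); solving the resulting quadratic for L recovers scene luminance from display luminance. The sketch below implements just that inversion; the white point and the final rescale to a nominal peak luminance are illustrative assumptions, not values from the report.

```python
import numpy as np

def inverse_reinhard(l_d, l_white=4.0, l_max=1000.0):
    """Invert Reinhard's global operator to expand an LDR luminance map.

    l_d     : display luminance in [0, 1) (the curve only reaches 1 at infinity)
    l_white : white point used by the forward operator (assumed here)
    l_max   : nominal peak luminance for rescaling (assumed here)
    """
    l_d = np.clip(l_d, 0.0, 0.9999)
    # Quadratic solution of l_d = L (1 + L / l_white^2) / (1 + L) for L.
    l = 0.5 * l_white**2 * (
        l_d - 1.0 + np.sqrt((1.0 - l_d)**2 + 4.0 * l_d / l_white**2))
    return l * l_max
```

Because the expansion is an analytic inverse, the same curve can be reused on the decoder side of an LDR-backwards-compatible HDR compression scheme, which is the link to compression the abstract draws.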
Item: Light Clustering for Dynamic Image Based Lighting (The Eurographics Association, 2012)
Staton, Sam; Debattista, Kurt; Bashford-Rogers, Thomas; Chalmers, Alan; Hamish Carr and Silvester Czanner
High Dynamic Range (HDR) imagery has made it possible to relight virtual objects accurately with the captured lighting. This technique, called Image Based Lighting (IBL), is commonly used to render scenes using real-world illumination. IBL has mostly been limited to static scenes due to limitations of HDR capture. However, recently there has been progress on developing devices which can capture HDR video sequences. These can also be used to light virtual environments dynamically. If existing IBL algorithms are applied to this dynamic problem, temporal artifacts viewed as flickering can often arise due to samples being selected from different parts of the environment in consecutive frames. In this paper we present a method for efficiently rendering virtual scenarios with such captured sequences based on spatial and temporal clustering. Our proposed Dynamic IBL (DIBL) method improves temporal quality by suppressing flickering, and we demonstrate its application to fast previews of scenes lit by video environment maps.
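The flickering described above comes from light samples jumping between frames. One hedged way to picture a clustering remedy is a luminance-weighted k-means over environment-map texel positions that is re-seeded with the previous frame's centroids, as sketched below; this is a generic illustration of the spatio-temporal clustering idea (it ignores spherical wrap-around and is written for clarity, not speed), not the paper's algorithm.

```python
import numpy as np

def cluster_lights(env, k=16, prev_centroids=None, n_iters=10):
    """Luminance-weighted k-means over environment-map texel positions.

    Passing the previous frame's centroids as `prev_centroids` keeps the
    clusters, and hence the chosen light samples, coherent across frames.
    """
    h, w, _ = env.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([xs.ravel() / w, ys.ravel() / h], axis=1)      # (h*w, 2)
    lum = env[..., :3].reshape(-1, 3) @ np.array([0.2126, 0.7152, 0.0722])
    if prev_centroids is None:
        # Cold start: seed with the k brightest texels.
        centroids = pos[np.argsort(lum)[-k:]].astype(float).copy()
    else:
        centroids = np.asarray(prev_centroids, dtype=float).copy()
    for _ in range(n_iters):
        dist = ((pos[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        label = dist.argmin(axis=1)
        for j in range(k):
            mask = label == j
            if lum[mask].sum() > 0:
                # Move each centroid to the luminance-weighted mean position.
                centroids[j] = (pos[mask] * lum[mask, None]).sum(axis=0) / lum[mask].sum()
    return centroids, label
```

Drawing one representative light per cluster then gives a small, temporally stable light set for each video frame.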
Item: Multi-Modal Perception for Selective Rendering (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Harvey, Carlo; Debattista, Kurt; Bashford-Rogers, Thomas; Chalmers, Alan; Chen, Min and Zhang, Hao (Richard)
A major challenge in generating high-fidelity virtual environments (VEs) is to be able to provide realism at interactive rates. The high-fidelity simulation of light and sound is still unachievable in real time, as such physical accuracy is very computationally demanding. Only recently has visual perception been used in high-fidelity rendering to improve performance by a series of novel exploitations; to render parts of the scene that are not currently being attended to by the viewer at a much lower quality without the difference being perceived. This paper investigates the effect spatialized directional sound has on the visual attention of a user towards rendered images. These perceptual artefacts are utilized in selective rendering pipelines via the use of multi-modal maps. The multi-modal maps are tested through psychophysical experiments to examine their applicability to selective rendering algorithms, with a series of fixed-cost rendering functions, and are found to perform significantly better than only using image saliency maps that are naively applied to multi-modal VEs.

Item: A Calibrated Olfactory Display for High Fidelity Virtual Environments (The Eurographics Association, 2016)
Dhokia, Amar; Doukakis, Efstratious; Asadipour, Ali; Harvey, Carlo; Bashford-Rogers, Thomas; Debattista, Kurt; Waterfield, Brian; Chalmers, Alan; Cagatay Turkay and Tao Ruan Wan
Olfactory displays provide a means to reproduce olfactory stimuli for use in virtual environments. Many of the designs produced by researchers strive to provide stimuli quickly to users and focus on improving usability and portability, yet concentrate less on providing high levels of accuracy to improve the fidelity of odour delivery. This paper provides guidance on building a reproducible and low-cost olfactory display which is able to provide odours to users in a virtual environment at accurate concentration levels that are typical in everyday interactions; this includes ranges of concentration below parts per million and into parts per billion. This paper investigates build concerns of the olfactometer and its proper calibration in order to ensure the concentration accuracy of the device. An analysis is provided of the recovery rates of a specific compound after excitation. This analysis provides insight into how the result can be generalised to the recovery rates of any volatile organic compound, given knowledge of the specific vapour pressure of the compound.

Item: Selective BRDFs for High Fidelity Rendering (The Eurographics Association, 2016)
Bradley, Tim; Debattista, Kurt; Bashford-Rogers, Thomas; Harvey, Carlo; Doukakis, Stratos; Chalmers, Alan; Cagatay Turkay and Tao Ruan Wan
High fidelity rendering systems rely on accurate material representations to produce a realistic visual appearance. However, these accurate models can be slow to evaluate. This work presents an approach for approximating these high-accuracy reflectance models with faster, less complicated functions in regions of an image which possess low visual importance. A subjective rating experiment was conducted in which thirty participants were asked to assess the similarity of scenes rendered with low-quality reflectance models, a high-quality data-driven model, and saliency-based hybrids of those images. In two of the three scenes evaluated, no significant differences were found between the hybrid and reference images. This implies that in less visually salient regions of an image, computational gains can be achieved by approximating computationally expensive materials with simpler analytic models.

Item: Visual Saliency for Smell Impulses and Application to Selective Rendering (The Eurographics Association, 2011)
Harvey, Carlo; Bashford-Rogers, Thomas E. W.; Debattista, Kurt; Chalmers, Alan; Ian Grimstead and Hamish Carr
A major challenge in generating high-fidelity virtual environments is to be able to provide realism at interactive rates. However, this is very computationally demanding, and only recently has visual perception been used in high-fidelity rendering to improve performance considerably by a series of novel exploitations: rendering parts of the scene that are not currently being attended to by the viewer at a much lower quality without the difference being perceived. This paper investigates the effect various smells have on the visual attention of the user when freely viewing a set of engineered images. We verify the worth of investigating the saccade shifts (fast movements of the eyes) that arise when attention is drawn to an object congruent with the presented smell. By analysing the gaze points, we identify the time spent attending to a particular area of a scene. We also present a technique, based on measured data, for remodulating traditional saliency maps of image features to account for the observed results. We show that smell provides an impulse on attention that affects perception in such a way that it can be used to guide the selective rendering of scenes through use of the remodulated saliency maps.
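The remodulation step in the last abstract can be pictured very simply: boost the saliency of the image region occupied by the smell-congruent object, renormalise, and spend the ray budget proportionally. The sketch below assumes a precomputed saliency map and an object mask as inputs, and the boost factor stands in for values that would be fitted to measured gaze data; it is an illustration of the idea, not the paper's measured remodulation.

```python
import numpy as np

def remodulate_saliency(saliency, smell_congruent_mask, boost=2.0):
    """Scale up saliency over the smell-congruent object's region, then
    renormalise so the map remains a distribution over pixels.
    `boost` is a placeholder for a factor derived from gaze measurements."""
    s = saliency * np.where(smell_congruent_mask, boost, 1.0)
    return s / s.sum()

def ray_budget(saliency, total_rays):
    """Map a (re)modulated saliency map to per-pixel sample counts for
    selective rendering: salient pixels receive more rays, and every
    pixel gets at least one."""
    return np.maximum(1, np.rint(saliency * total_rays)).astype(int)
```

A selective renderer would then trace `ray_budget(remodulate_saliency(...), n)` samples per pixel, concentrating effort where attention is most likely to land.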