Search Results

Now showing 1 - 10 of 38
  • Item
    Recreating Early Islamic Glass Lamp Lighting
    (The Eurographics Association, 2009) Kider Jr., Joseph T.; Fletcher, Rebecca L.; Yu, Nancy; Holod, Renata; Chalmers, Alan; Badler, Norman I.; Kurt Debattista and Cinzia Perlingieri and Denis Pitzalis and Sandro Spina
    Early Islamic light sources are not simple, static, uniform points, and the fixtures themselves are often combinations of glass, water, fuel and flame. Various physically based renderers such as Radiance are widely used for modeling ancient architectural scenes; however, they rarely capture the true ambiance of the environment due to subtle lighting effects. Specifically, these renderers often fail to correctly model the complex caustics produced by glass fixtures, water levels, and fuel sources. While the original fixtures of the 8th through 10th century Mosque of Cordoba in Spain have not survived, we have applied information gathered from earlier and contemporary sites and artifacts, including those from Byzantium, to infer that it was illuminated either by single jar lamps or by polycandela that cast unique downward caustic lighting patterns which helped individuals to navigate and to read. To re-synthesize such lighting, we gathered experimental archaeological data and investigated and validated how various water levels and glass fixture shapes, likely used during early Islamic times, changed the overall light patterns and downward caustics. In this paper, we propose a technique called Caustic Cones, a novel data-driven method to "shape" the light emanating from the lamps to better recreate the downward lighting without resorting to computationally expensive photon mapping renderers. Additionally, we demonstrate on a rendering of the Mosque of Cordoba how our approach greatly benefits archaeologists and architectural historians by providing a more authentic visual simulation of early Islamic glass lamp lighting.
  • Item
    High Dynamic Range Video for Cultural Heritage Documentation and Experimental Archaeology
    (The Eurographics Association, 2010) Happa, Jassim; Artusi, Alessandro; Czanner, Silvester; Chalmers, Alan; Alessandro Artusi and Morwena Joly and Genevieve Lucet and Denis Pitzalis and Alejandro Ribes
    Video recording and photography are frequently used to document Cultural Heritage (CH) objects and sites. High Dynamic Range (HDR) imaging is increasingly being used as it allows a wider range of light to be considered than most current technologies are able to natively acquire and reproduce. HDR video content, however, has only recently become possible at a desirable, high-definition resolution and dynamic range. In this paper we explore the potential use of a 20 f-stop HDR video camera for CH documentation and experimental archaeology purposes. We discuss data acquisition of moving caustics, flames, distant light, and light in participating media. Comparisons of Low Dynamic Range (LDR) and HDR content are made to illustrate the additional data that this new technology is able to capture, and the benefits this is likely to bring to CH documentation and experimental archaeology.
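
Note: as a rough illustration of what a capture range quoted in f-stops means in practice, the short sketch below computes the dynamic range of an image as log2 of its max/min luminance ratio. The synthetic LDR and HDR frames are illustrative assumptions only and are unrelated to the authors' camera or data.

```python
import numpy as np

def dynamic_range_fstops(luminance, eps=1e-6):
    """Dynamic range of a luminance image expressed in f-stops (log2 of max/min)."""
    lum = luminance[luminance > eps]          # ignore zero / noise-floor pixels
    return float(np.log2(lum.max() / lum.min()))

# Illustrative comparison: an 8-bit LDR frame vs. a synthetic linear HDR frame.
ldr = np.random.randint(1, 256, (480, 640)).astype(np.float64) / 255.0
hdr = np.random.lognormal(mean=0.0, sigma=3.0, size=(480, 640))

print(f"LDR frame: {dynamic_range_fstops(ldr):.1f} f-stops")
print(f"HDR frame: {dynamic_range_fstops(hdr):.1f} f-stops")
```
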
  • Item
    Point Cloud Segmentation for Cultural Heritage Sites
    (The Eurographics Association, 2011) Spina, Sandro; Debattista, Kurt; Bugeja, Keith; Chalmers, Alan; Franco Niccolucci and Matteo Dellepiane and Sebastian Pena Serna and Holly Rushmeier and Luc Van Gool
    Over the past few years, the acquisition of 3D point information representing the structure of real-world objects has become common practice in many areas. This is particularly true in the Cultural Heritage (CH) domain, where point clouds reproducing important and usually unique artifacts and sites of various sizes and geometric complexities are acquired. Specialized software is then usually used to process and organise this data. This paper addresses the problem of automatically organising this raw data by segmenting point clouds into meaningful subsets. This organisation of the raw data entails a reduction in complexity and facilitates the post-processing effort required to work with the individual objects in the scene. This paper describes an efficient two-stage segmentation algorithm which is able to automatically partition raw point clouds. Following an initial partitioning of the point cloud, a RANSAC-based plane fitting algorithm is used to add a further layer of abstraction. A number of potential uses of the newly processed point cloud are presented, one of which is object extraction using point cloud queries. Our method is demonstrated on three point clouds ranging from 600K to 1.9M points. One of these point clouds was acquired from the prehistoric temple of Mnajdra, which consists of multiple adjacent complex structures.
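
Note: the two-stage segmentation algorithm itself is not reproduced here; the sketch below only illustrates the kind of RANSAC-based plane fitting the abstract refers to, in a minimal generic form. The iteration count and inlier threshold are assumed values, not those used in the paper.

```python
import numpy as np

def ransac_plane(points, n_iters=500, inlier_thresh=0.02, rng=None):
    """Fit a single plane to an (N, 3) point cloud with a basic RANSAC loop.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0 with most inliers."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        # Sample three distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Inliers are points within the distance threshold of the plane.
        inliers = np.abs(points @ normal + d) < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers
```
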
  • Item
    Rendering Interior Cultural Heritage Scenes Using Image-based Shooting
    (The Eurographics Association, 2011) Happa, Jassim; Bashford-Rogers, Tom; Debattista, Kurt; Chalmers, Alan; A. Day and R. Mantiuk and E. Reinhard and R. Scopigno
    Rendering interior cultural heritage scenes using physically based rendering with outdoor environment maps is computationally expensive with ray tracing methods, and currently difficult for interactive applications without significant precomputation of lighting. In this paper, we present a novel approach to relight synthetic interior scenes by extending image-based lighting to generate fast, high-quality interactive previews of these environments. Interior light probes are acquired from a real scene and then used to shoot light onto the virtual scene geometry, accelerating image synthesis by assuming the shot light sources act as the correct solution of light transport for that particular intersection point. We term this approach Image-Based Shooting. It is demonstrated in this paper with an approach inspired by Irradiance Cache Splatting. The methodology is well suited to interior scenes in which light enters through narrow windows and doors, as is common at cultural heritage sites. Our implementation generates high-quality interactive preview renditions of these sites and can significantly aid documentation, 3D model validation and predictive rendering. The method can easily be integrated with existing cultural heritage reconstruction pipelines, especially ray tracing based renderers.
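
Note: the Image-Based Shooting algorithm itself is not reproduced here. The sketch below only shows a conventional preprocessing step that such an approach might build on, under assumed conventions: converting an HDR light probe (latitude-longitude format) into a set of directional light samples drawn in proportion to solid-angle-weighted luminance. Importance-sampling normalisation and the splatting stage are omitted.

```python
import numpy as np

def probe_to_directional_lights(env_map, n_lights=64, rng=None):
    """Pick n_lights pixels from an HDR lat-long probe (H, W, 3), with probability
    proportional to solid-angle-weighted luminance, and return their directions
    and radiance values. (Importance-sampling normalisation is omitted.)"""
    rng = np.random.default_rng(rng)
    h, w, _ = env_map.shape
    lum = env_map @ np.array([0.2126, 0.7152, 0.0722])   # per-pixel luminance
    theta = (np.arange(h) + 0.5) / h * np.pi             # polar angle per row
    prob = (lum * np.sin(theta)[:, None]).ravel()
    prob /= prob.sum()
    idx = rng.choice(h * w, size=n_lights, replace=False, p=prob)
    rows, cols = np.unravel_index(idx, (h, w))
    th = (rows + 0.5) / h * np.pi
    ph = (cols + 0.5) / w * 2.0 * np.pi
    directions = np.stack([np.sin(th) * np.cos(ph),
                           np.cos(th),
                           np.sin(th) * np.sin(ph)], axis=1)
    return directions, env_map[rows, cols]
```
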
  • Item
    Accurate Modelling of Roman Lamps in Conimbriga using High Dynamic Range
    (The Eurographics Association, 2008) Gonçalves, Alexandrino José Marques; Magalhães, Luís Gonzaga; Moura, João Paulo; Chalmers, Alan; Michael Ashley and Sorin Hermon and Alberto Proenca and Karina Rodriguez-Echavarria
    The Human Visual System has a remarkable ability to perceive the colour and contrast of everything that surrounds us. This is particularly evident in extreme lighting conditions such as bright light or dark environments. However, it is simply not possible to represent such a range of lighting on a typical display today. This is about to change. The field of High Dynamic Range (HDR) imagery allows us to capture and display the full range of human vision. The use of technologies in the preservation and dissemination of cultural heritage can play an important role in the representation and interpretation of our past legacy. A major field of application is virtual reconstructions of ancient historical environments. In this domain, the way we see such (reconstructed) environments is particularly important in order to establish a correct interpretation of that historical setting. In this paper we present a case study of the reconstruction of a Roman site. We generate HDR images of mosaics and frescoes from one of the most impressive monuments in the ruins of Conimbriga, Portugal, an ancient city of the Roman Empire. We show that the HDR viewing paradigm is well suited for archaeological interpretation, since its high contrast and chromaticity can disclose and present an enhanced viewing experience, closer to how the artefacts may have been perceived in the past. To achieve the requisite level of precision, in addition to a precise geometric 3D model, it is crucial to integrate authentic physical data of the light used in the period under consideration into the virtual simulation. Therefore, to create a realistic physically based environment, we use real data obtained from Roman luminaires of that time in our lighting simulations.
  • Item
    High Dynamic Range Imaging and Low Dynamic Range Expansion for Generating HDR Content
    (The Eurographics Association, 2009) Banterle, Francesco; Debattista, Kurt; Artusi, Alessandro; Pattanaik, Sumanta; Myszkowski, Karol; Ledda, Patrick; Bloj, Marina; Chalmers, Alan; M. Pauly and G. Greiner
    In the last few years, researchers in the field of High Dynamic Range (HDR) Imaging have focused on providing tools for expanding Low Dynamic Range (LDR) content for the generation of HDR images, due to the growing popularity of HDR in applications such as photography and rendering via Image-Based Lighting, and the imminent arrival of HDR displays on the consumer market. LDR content expansion is required due to the lack of fast and reliable consumer-level HDR capture for still images and videos. Furthermore, LDR content expansion will allow legacy LDR stills, videos and LDR applications created over the last century and more to be re-used and made widely available. The use of certain LDR expansion methods, those based on the inversion of tone mapping operators, has made it possible to create novel compression algorithms that tackle the problem of the size of HDR content storage, which remains one of the major obstacles to be overcome for the adoption of HDR. These methods are used in conjunction with traditional LDR compression methods and can evolve accordingly. The goal of this report is to provide a comprehensive overview of HDR Imaging, and an in-depth review of these emerging topics.
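
Note: the report surveys many expansion (inverse tone mapping) operators; the sketch below inverts only the simple global Reinhard curve Ld = Lw / (1 + Lw) as one illustrative example. The sRGB linearisation step and the peak-luminance rescaling to l_max are assumptions, not a method taken from the report.

```python
import numpy as np

def expand_ldr(ldr_srgb, l_max=1000.0):
    """Expand an 8-bit sRGB LDR image into HDR luminance by inverting the simple
    global Reinhard operator Ld = Lw / (1 + Lw), then rescaling to l_max cd/m^2."""
    ldr = ldr_srgb.astype(np.float64) / 255.0
    # Remove the sRGB display non-linearity to get approximately linear values.
    linear = np.where(ldr <= 0.04045, ldr / 12.92, ((ldr + 0.055) / 1.055) ** 2.4)
    ld = np.clip(linear, 0.0, 0.999)      # avoid division by zero at pure white
    lw = ld / (1.0 - ld)                  # inverse of Ld = Lw / (1 + Lw)
    return lw / lw.max() * l_max
```
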
  • Item
    Light Clustering for Dynamic Image Based Lighting
    (The Eurographics Association, 2012) Staton, Sam; Debattista, Kurt; Bashford-Rogers, Thomas; Chalmers, Alan; Hamish Carr and Silvester Czanner
    High Dynamic Range (HDR) imagery has made it possible to relight virtual objects accurately with captured lighting. This technique, called Image Based Lighting (IBL), is commonly used to render scenes using real-world illumination. IBL has mostly been limited to static scenes due to limitations of HDR capture. However, there has recently been progress on developing devices which can capture HDR video sequences. These can also be used to light virtual environments dynamically. If existing IBL algorithms are applied to this dynamic problem, temporal artifacts perceived as flickering can often arise due to samples being selected from different parts of the environment in consecutive frames. In this paper we present a method for efficiently rendering virtual scenarios with such captured sequences based on spatial and temporal clustering. Our proposed Dynamic IBL (DIBL) method improves temporal quality by suppressing flickering, and we demonstrate its application to fast previews of scenes lit by video environment maps.
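
Note: the paper's DIBL clustering is not reproduced here. The sketch below only shows one generic way to obtain temporally stable light clusters, under the assumptions stated in the comments: a plain k-means over light samples, with each new frame seeded by the previous frame's centroids so that cluster identities, and hence the selected representative lights, change smoothly.

```python
import numpy as np

def kmeans_lights(samples, k=16, init_centroids=None, n_iters=20, rng=None):
    """Cluster (N, D) light samples (e.g. direction + luminance features) with k-means.

    Passing the previous frame's centroids as init_centroids keeps cluster
    identities stable between frames, which reduces temporal flickering."""
    rng = np.random.default_rng(rng)
    if init_centroids is None:
        centroids = samples[rng.choice(len(samples), k, replace=False)].astype(float)
    else:
        centroids = init_centroids.astype(float).copy()
    for _ in range(n_iters):
        # Assign each sample to its nearest centroid.
        dists = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = samples[labels == j].mean(axis=0)
    return centroids, labels

# Frame-to-frame usage: seed frame t+1 with frame t's centroids.
# centroids_t1, _ = kmeans_lights(samples_t1, init_centroids=centroids_t0)
```
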
  • Item
    Multi-Modal Perception for Selective Rendering
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Harvey, Carlo; Debattista, Kurt; Bashford-Rogers, Thomas; Chalmers, Alan; Chen, Min and Zhang, Hao (Richard)
    A major challenge in generating high‐fidelity virtual environments (VEs) is to be able to provide realism at interactive rates. The high‐fidelity simulation of light and sound is still unachievable in real time, as such physical accuracy is very computationally demanding. Only recently has visual perception been used in high‐fidelity rendering to improve performance through a series of novel exploitations: rendering parts of the scene that are not currently being attended to by the viewer at a much lower quality without the difference being perceived. This paper investigates the effect spatialized directional sound has on the visual attention of a user towards rendered images. These perceptual artefacts are utilized in selective rendering pipelines via the use of multi‐modal maps. The multi‐modal maps are tested through psychophysical experiments to examine their applicability to selective rendering algorithms, with a series of fixed-cost rendering functions, and are found to perform significantly better than only using image saliency maps that are naively applied to multi‐modal VEs.
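
Note: the exact construction of the paper's multi-modal maps is not reproduced here. The sketch below is a simplified stand-in under assumed conventions: a normalised weighted blend of a visual saliency map and a map of attention drawn by spatialised sound, converted into a per-pixel ray budget for a selective renderer. The blend weights and budget range are illustrative assumptions.

```python
import numpy as np

def multimodal_map(image_saliency, sound_attention, w_visual=0.5, w_audio=0.5):
    """Blend a visual saliency map with a map of attention drawn by spatialised
    sound into one per-pixel importance map; all maps are normalised to [0, 1]."""
    def norm(m):
        m = m.astype(np.float64)
        return (m - m.min()) / (m.max() - m.min() + 1e-12)
    return norm(w_visual * norm(image_saliency) + w_audio * norm(sound_attention))

def rays_per_pixel(importance, min_rays=1, max_rays=64):
    """Turn importance into a per-pixel sampling budget for selective rendering."""
    return np.rint(min_rays + importance * (max_rays - min_rays)).astype(int)
```
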
  • Item
    Automatic Reconstruction of Virtual Heritage Sites
    (The Eurographics Association, 2008) Rodrigues, Nuno; Magalhães, Luís Gonzaga; Moura, João Paulo; Chalmers, Alan; Michael Ashley and Sorin Hermon and Alberto Proenca and Karina Rodriguez-Echavarria
    The virtual reconstruction of heritage sites has been the focus of many projects. These typically involve significant use of manual reconstruction techniques, and thus a great deal of human effort to create the virtual structures. Also, often there is not sufficient physical evidence to recreate these structures precisely as they may have been in the past. To address these issues, a domain-specific modelling method for the automatic generation of virtual heritage structures is presented in this paper. The method is guided by heritage knowledge about the construction rules of such structures, encoded in a formal grammar, and may be used to create new structures automatically. The case study entails the automatic reconstruction of the archaeological site of Conimbriga, in Portugal, which contains the ruins of an ancient city of the Roman Empire. The results show the generation of a virtual reconstruction of a particular house, the House of the Skeletons, which was of particular relevance to the city because of its architecture.
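
Note: the formal grammar used for Conimbriga is not reproduced in the abstract, so the sketch below only illustrates the general mechanism with an invented toy grammar: non-terminal symbols are rewritten by production rules until only terminal building elements remain. The rule set and symbol names are hypothetical.

```python
import random

# Toy production rules: each non-terminal maps to one or more possible expansions.
# These rules and names are invented for illustration and are NOT the paper's grammar.
RULES = {
    "House":     [["Entrance", "Atrium", "Peristyle"]],
    "Atrium":    [["Impluvium", "Rooms"], ["Rooms"]],
    "Peristyle": [["Garden", "Colonnade", "Rooms"]],
    "Rooms":     [["Room"], ["Room", "Rooms"]],
}

def expand(symbol, rng=None):
    """Recursively rewrite a symbol until only terminals (symbols with no rule) remain."""
    rng = rng or random.Random(0)
    if symbol not in RULES:
        return [symbol]
    production = rng.choice(RULES[symbol])
    return [terminal for sub in production for terminal in expand(sub, rng)]

print(expand("House"))   # e.g. ['Entrance', 'Impluvium', 'Room', 'Garden', ...]
```
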
  • Item
    A Calibrated Olfactory Display for High Fidelity Virtual Environments
    (The Eurographics Association, 2016) Dhokia, Amar; Doukakis, Efstratious; Asadipour, Ali; Harvey, Carlo; Bashford-Rogers, Thomas; Debattista, Kurt; Waterfield, Brian; Chalmers, Alan; Cagatay Turkay and Tao Ruan Wan
    Olfactory displays provide a means to reproduce olfactory stimuli for use in virtual environments. Many of the designs produced by researchers strive to provide stimuli quickly to users and focus on improving usability and portability, yet concentrate less on providing high levels of accuracy to improve the fidelity of odour delivery. This paper provides guidance on building a reproducible and low-cost olfactory display which is able to provide odours to users in a virtual environment at the accurate concentration levels typical of everyday interactions; this includes concentration ranges below parts per million and down to parts per billion. This paper investigates the construction concerns of the olfactometer and its proper calibration in order to ensure the concentration accuracy of the device. An analysis is provided on the recovery rates of a specific compound after excitation. This analysis provides insight into how this result can be generalised to the recovery rates of any volatile organic compound, given knowledge of that compound's vapour pressure.
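
Note: the paper's calibration procedure is not reproduced here. The sketch below is only a simplified headspace-dilution estimate of the kind one could use to reason about a dilution olfactometer's set points, under a stated assumption: air leaving the odorant reservoir is fully saturated at the compound's vapour pressure and is then diluted into clean carrier air. The ethanol vapour pressure in the example is approximate.

```python
def delivered_concentration_ppm(vapour_pressure_kpa, odour_flow, carrier_flow,
                                ambient_pressure_kpa=101.325):
    """Estimate the odorant concentration delivered by a dilution olfactometer.

    Assumes the air leaving the odorant reservoir is saturated at the compound's
    vapour pressure, then diluted into a clean carrier stream (flows in the same
    units, e.g. mL/min). Returns parts per million by volume."""
    headspace_ppm = vapour_pressure_kpa / ambient_pressure_kpa * 1e6
    dilution = odour_flow / (odour_flow + carrier_flow)
    return headspace_ppm * dilution

# Example: ethanol (~5.9 kPa at 20 C) at 10 mL/min into 990 mL/min of clean air.
print(f"{delivered_concentration_ppm(5.9, 10, 990):.0f} ppm")
```
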