Search Results

Now showing 1 - 10 of 21
  • Item
    Light Clustering for Dynamic Image Based Lighting
    (The Eurographics Association, 2012) Staton, Sam; Debattista, Kurt; Bashford-Rogers, Thomas; Chalmers, Alan; Hamish Carr and Silvester Czanner
    High Dynamic Range (HDR) imagery has made it possible to relight virtual objects accurately with captured lighting. This technique, called Image Based Lighting (IBL), is commonly used to render scenes using real-world illumination. IBL has mostly been limited to static scenes due to limitations of HDR capture. However, there has recently been progress on devices which can capture HDR video sequences, and these can also be used to light virtual environments dynamically. If existing IBL algorithms are applied to this dynamic problem, temporal artifacts, seen as flickering, can arise because samples are selected from different parts of the environment in consecutive frames. In this paper we present a method for efficiently rendering virtual scenarios with such captured sequences, based on spatial and temporal clustering. Our proposed Dynamic IBL (DIBL) method improves temporal quality by suppressing flickering, and we demonstrate its application to fast previews of scenes lit by video environment maps.
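    A minimal sketch of the flicker-suppression idea described above, assuming a k-means-style clustering of importance-sampled environment-map lights; seeding each frame with the previous frame's centroids is the temporal-coherence trick, while the sampling and weighting choices here are illustrative, not the authors' exact algorithm:

    ```python
    import numpy as np

    def cluster_lights(samples, weights, prev_centroids=None, k=16, iters=8):
        """Cluster light samples (Nx3 directions drawn from the environment
        map) into k representative lights. Seeding with the previous frame's
        centroids keeps clusters temporally coherent, suppressing flicker."""
        if prev_centroids is None:
            idx = np.random.choice(len(samples), k, replace=False)
            centroids = samples[idx].astype(float)
        else:
            centroids = prev_centroids.copy()
        for _ in range(iters):
            # Assign each sample to its nearest centroid.
            d = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # Move each centroid to the weighted mean of its members.
            for c in range(k):
                m = labels == c
                if m.any():
                    w = weights[m][:, None]
                    centroids[c] = (samples[m] * w).sum(0) / w.sum()
        return centroids, labels
    ```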
  • Item
    Multi-Modal Perception for Selective Rendering
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Harvey, Carlo; Debattista, Kurt; Bashford-Rogers, Thomas; Chalmers, Alan; Chen, Min and Zhang, Hao (Richard)
    A major challenge in generating high‐fidelity virtual environments (VEs) is to provide realism at interactive rates. High‐fidelity simulation of light and sound is still unachievable in real time, as such physical accuracy is very computationally demanding. Only recently has visual perception been used in high‐fidelity rendering to improve performance through a series of novel exploitations: parts of the scene not currently attended to by the viewer are rendered at much lower quality without the difference being perceived. This paper investigates the effect spatialized directional sound has on a user's visual attention towards rendered images. These perceptual artefacts are utilized in selective rendering pipelines via multi‐modal maps. The multi‐modal maps are tested through psychophysical experiments to examine their applicability to selective rendering algorithms, with a series of fixed-cost rendering functions, and are found to perform significantly better than image saliency maps naively applied to multi‐modal VEs.
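    One plausible reading of a multi-modal map is a per-pixel importance map blending visual saliency with an acoustic term centred on the projected position of a directional sound source. The Gaussian footprint and the blend weight alpha below are assumptions for illustration, not the paper's measured model:

    ```python
    import numpy as np

    def multimodal_map(visual_saliency, sound_px, sigma=40.0, alpha=0.5):
        """Blend an image saliency map (HxW, values in [0,1]) with a Gaussian
        'acoustic saliency' centred on the sound source's screen position."""
        h, w = visual_saliency.shape
        ys, xs = np.mgrid[0:h, 0:w]
        sound = np.exp(-((xs - sound_px[0]) ** 2 + (ys - sound_px[1]) ** 2)
                       / (2.0 * sigma ** 2))
        m = alpha * visual_saliency + (1.0 - alpha) * sound
        return m / m.max()  # normalised so it can drive a sample budget
    ```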
  • Item
    A Calibrated Olfactory Display for High Fidelity Virtual Environments
    (The Eurographics Association, 2016) Dhokia, Amar; Doukakis, Efstratios; Asadipour, Ali; Harvey, Carlo; Bashford-Rogers, Thomas; Debattista, Kurt; Waterfield, Brian; Chalmers, Alan; Cagatay Turkay and Tao Ruan Wan
    Olfactory displays provide a means to reproduce olfactory stimuli for use in virtual environments. Many of the designs produced by researchers strive to deliver stimuli quickly and focus on usability and portability, yet concentrate less on the accuracy needed for high-fidelity odour delivery. This paper provides guidance for building a reproducible, low-cost olfactory display able to deliver odours to users in a virtual environment at the accurate concentration levels typical of everyday interactions, including ranges below parts per million and into parts per billion. The paper investigates the construction of the olfactometer and its calibration in order to ensure the device's concentration accuracy. An analysis is provided of the recovery rates of a specific compound after excitation, offering insight into how this result generalises to the recovery rate of any volatile organic compound, given its vapour pressure.
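    The quoted concentration ranges follow from simple flow-dilution arithmetic: the delivered concentration is the source concentration scaled by the odorant stream's share of the total flow. A minimal sketch with invented flow values (not figures from the paper):

    ```python
    def diluted_ppb(source_ppm, odorant_flow_mlmin, carrier_flow_mlmin):
        """Concentration after mixing an odorant stream into a clean carrier
        stream, assuming ideal mixing: c = c_src * q_odorant / q_total."""
        total_flow = odorant_flow_mlmin + carrier_flow_mlmin
        return source_ppm * 1000.0 * odorant_flow_mlmin / total_flow  # ppm -> ppb

    # Example: a 10 ppm source at 5 ml/min diluted into 5 l/min of clean air
    # delivers roughly a 10 ppb stimulus.
    print(diluted_ppb(10.0, 5.0, 5000.0))  # ~9.99 ppb
    ```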
  • Item
    Selective BRDFs for High Fidelity Rendering
    (The Eurographics Association, 2016) Bradley, Tim; Debattista, Kurt; Bashford-Rogers, Thomas; Harvey, Carlo; Doukakis, Stratos; Chalmers, Alan; Cagatay Turkay and Tao Ruan Wan
    High fidelity rendering systems rely on accurate material representations to produce a realistic visual appearance. However, these accurate models can be slow to evaluate. This work presents an approach for approximating high-accuracy reflectance models with faster, less complicated functions in regions of an image with low visual importance. A subjective rating experiment was conducted in which thirty participants assessed the similarity of scenes rendered with low-quality reflectance models, a high-quality data-driven model, and saliency-based hybrids of those images. In two of the three scenes evaluated, no significant differences were found between the hybrid and reference images. This implies that, in less visually salient regions of an image, computational gains can be achieved by approximating computationally expensive materials with simpler analytic models.
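    The hybrid images can be pictured as a per-pixel switch: evaluate the expensive data-driven BRDF only where a saliency map exceeds a threshold, and a cheap analytic model elsewhere. The threshold and the shape of the two evaluators are illustrative assumptions:

    ```python
    def shade_pixel(x, y, saliency, eval_measured, eval_analytic, threshold=0.5):
        """Per-pixel BRDF selection: use the costly measured model only where
        the viewer is likely to look; fall back to a cheap analytic model."""
        if saliency[y, x] >= threshold:
            return eval_measured(x, y)   # data-driven BRDF: slow, accurate
        return eval_analytic(x, y)       # e.g. Lambertian/Phong: fast, approximate
    ```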
  • Item
    GCH 2021: Frontmatter
    (The Eurographics Association, 2021) Hulusic, Vedad; Chalmers, Alan; Hulusic, Vedad and Chalmers, Alan
  • Item
    Olfaction and Selective Rendering
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Harvey, Carlo; Bashford-Rogers, Thomas; Debattista, Kurt; Doukakis, Efstratios; Chalmers, Alan; Chen, Min and Benes, Bedrich
    Accurate simulation of all the senses in virtual environments is a computationally expensive task. Visual saliency models have been used to improve computational performance for rendered content, but this is insufficient for multi‐modal environments. This paper considers cross‐modal perception and, in particular, if and how olfaction affects visual attention. Two experiments are presented. In the first, eye-tracking data is gathered from a number of participants to establish where and how they view virtual objects when a smell is introduced, compared to an odourless condition. Based on the results of this experiment, a new type of saliency map for a selective‐rendering pipeline is presented. A second experiment validates this approach, and demonstrates that participants rank images as better quality, when compared to a reference, for the same rendering budget.
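    The "same rendering budget" comparison can be made concrete: a fixed total number of samples is distributed across pixels in proportion to the (olfaction-modulated) saliency map. The proportional allocation rule here is an assumption, not the paper's exact scheme:

    ```python
    import numpy as np

    def allocate_samples(saliency, total_samples, min_spp=1):
        """Distribute a fixed sample budget over pixels in proportion to a
        saliency map, so salient regions are rendered at higher quality."""
        weights = saliency / saliency.sum()
        spp = np.maximum(min_spp, np.floor(weights * total_samples)).astype(int)
        return spp  # samples-per-pixel map; sums to roughly the budget
    ```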
  • Item
    A Subjective Evaluation of Texture Synthesis Methods
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Kolár, Martin; Debattista, Kurt; Chalmers, Alan; Loic Barthe and Bedrich Benes
    This paper presents the results of a user study which quantifies the relative and absolute quality of example-based texture synthesis algorithms. To allow such evaluation, a list of texture properties is compiled, and a minimal representative set of textures is selected to cover them. Six texture synthesis methods are compared against each other and a reference on a selection of twelve textures by non-expert participants (N = 67). Results demonstrate that certain algorithms successfully solve the texture synthesis problem for some textures, but none produce satisfactory results for other types of texture properties. The presented textures and results make it possible for future work to be compared subjectively, thus facilitating the development of future texture synthesis methods.
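    Analyses of subjective ratings like these typically reduce to per-texture score distributions plus a paired significance test. A minimal sketch using SciPy's Wilcoxon signed-rank test; the data layout is an assumption:

    ```python
    import numpy as np
    from scipy.stats import wilcoxon

    def compare_methods(ratings_a, ratings_b):
        """ratings_a, ratings_b: each participant's similarity score for two
        synthesis methods on the same texture. Returns both means and the
        p-value of a paired Wilcoxon signed-rank test."""
        _, p_value = wilcoxon(ratings_a, ratings_b)
        return np.mean(ratings_a), np.mean(ratings_b), p_value
    ```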
  • Item
    Scene Segmentation and Understanding for Context-Free Point Clouds
    (The Eurographics Association, 2014) Spina, Sandro; Debattista, Kurt; Bugeja, Keith; Chalmers, Alan; John Keyser and Young J. Kim and Peter Wonka
    The continuous development of new commodity hardware intended to capture the surface structure of objects is quickly making point cloud data ubiquitous. Scene understanding methods address the problem of determining the objects present in a point cloud which, depending on sensor capabilities and object occlusions, is normally noisy and incomplete. In this paper, we propose a novel technique which enables automatic identification of semantically meaningful structures within point clouds acquired using different sensors on a variety of scenes. A representation model, the structure graph, whose nodes represent planar surface segments, is computed over these point clouds to support the identification task. To accommodate more complex objects (e.g. chair, couch, cabinet, table), a training process determines and concisely describes, within each object's structure graph, its important shape characteristics. Results on a variety of point clouds show how our method can quickly discern certain object types.
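    A structure graph of the kind described can be sketched as an adjacency graph over planar segments: each node stores one segment, and an edge links two segments whose point sets come close together. The proximity test and threshold below are assumptions:

    ```python
    import numpy as np

    def build_structure_graph(segments, touch_dist=0.05):
        """segments: list of (Nx3 point array, plane normal) per planar patch.
        Nodes are segment indices; edges link segments whose point clouds
        approach within touch_dist metres."""
        graph = {i: [] for i in range(len(segments))}
        for i in range(len(segments)):
            for j in range(i + 1, len(segments)):
                pi, pj = segments[i][0], segments[j][0]
                # Brute-force closest pair; a k-d tree would scale better.
                d = np.linalg.norm(pi[:, None, :] - pj[None, :, :], axis=2).min()
                if d < touch_dist:
                    graph[i].append(j)
                    graph[j].append(i)
        return graph
    ```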
  • Item
    Backwards Compatible JPEG Stereoscopic High Dynamic Range Imaging
    (The Eurographics Association, 2012) Selmanovic, Elmedin; Debattista, Kurt; Bashford-Rogers, Thomas; Chalmers, Alan; Hamish Carr and Silvester Czanner
    In this paper we introduce Stereoscopic High Dynamic Range (SHDR) Imagery, a novel technique that combines high dynamic range imaging and stereoscopy. Stereoscopic imaging captures two images representing the views of both eyes and allows for better depth perception. High dynamic range (HDR) imaging is an emerging technology which allows the capture, storage and display of real-world lighting, as opposed to traditional imagery, which captures only a restricted range of light due to limitations in capture hardware and displays. HDR provides better contrast and more natural-looking scenes. One of the main challenges that must be overcome for SHDR to be successful is an efficient storage format, since uncompressed SHDR is very large: stereoscopic imaging requires storing two images, and uncompressed HDR requires a floating-point value per colour channel per pixel. In this paper we present a number of SHDR compression methods that are backward compatible with traditional JPEG, stereo JPEG and JPEG-HDR. The proposed methods can encode SHDR content at little more than the size of a traditional LDR image, and backward compatibility encourages early adoption, since the content remains viewable in any legacy viewer.
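    The backward-compatible pattern this builds on (as in JPEG-HDR) can be sketched in two steps: store a tone-mapped LDR image as the ordinary JPEG payload, and hide a quantised log-luminance ratio image in metadata that legacy decoders ignore. A simplified illustration only; the real formats treat colour and quantisation far more carefully:

    ```python
    import numpy as np

    def encode_ratio_image(hdr_lum, ldr_lum, eps=1e-6):
        """Quantise log2(HDR/LDR) luminance to 8 bits. A legacy viewer shows
        only the LDR base image; an HDR-aware decoder multiplies the ratio
        back in to recover the full dynamic range."""
        ratio = np.log2((hdr_lum + eps) / (ldr_lum + eps))
        lo, hi = float(ratio.min()), float(ratio.max())
        hi = max(hi, lo + eps)  # guard against a constant ratio image
        q = np.round(255.0 * (ratio - lo) / (hi - lo)).astype(np.uint8)
        return q, (lo, hi)  # (lo, hi) travels in the metadata as well

    def decode_ratio_image(ldr_lum, q, lo, hi, eps=1e-6):
        ratio = lo + (hi - lo) * q.astype(np.float32) / 255.0
        return (ldr_lum + eps) * np.exp2(ratio)
    ```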
  • Item
    Efficient Remote Rendering Using Equirectangular Projection
    (The Eurographics Association, 2017) McNamee, Josh; Debattista, Kurt; Chalmers, Alan; Tao Ruan Wan and Franck Vidal
    Presenting high-quality Virtual Reality (VR) experiences on head-mounted displays (HMDs) imposes significant computational demands. To ensure a high-fidelity experience, the displayed images must be accurate and detailed, and must respond with very low latency. Achieving such high-fidelity, realistic experiences requires taking advantage of remote high-performance computing resources. This paper presents a novel method of streaming high-fidelity graphics content from a remote, physically accurate renderer to an HMD. In particular, an equirectangular projection is transmitted from the cloud to a client, so that latency-free 360° observations can be made from a viewpoint.
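    The client-side step the abstract implies can be sketched directly: for each HMD pixel, build a view ray from the current head rotation, convert it to equirectangular (longitude, latitude) coordinates, and sample the streamed panorama; because the panorama covers the full sphere, rotational head movement needs no new server frame. The pinhole intrinsics and axis conventions below are assumptions:

    ```python
    import numpy as np

    def reproject(pano, R, fov_deg=90.0, out_w=640, out_h=640):
        """Render one view from an equirectangular panorama (HxWx3) given a
        3x3 head-rotation matrix R, using nearest-neighbour sampling."""
        ph, pw = pano.shape[:2]
        f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)
        ys, xs = np.mgrid[0:out_h, 0:out_w]
        dirs = np.stack([xs - out_w / 2.0, ys - out_h / 2.0,
                         np.full((out_h, out_w), f)], axis=-1)
        dirs = dirs @ R.T                                   # rotate rays by head pose
        dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
        lon = np.arctan2(dirs[..., 0], dirs[..., 2])        # [-pi, pi]
        lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))   # [-pi/2, pi/2]
        u = ((lon / np.pi + 1.0) * 0.5 * (pw - 1)).astype(int)
        v = ((lat / np.pi + 0.5) * (ph - 1)).astype(int)
        return pano[v, u]
    ```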