Search Results
Now showing 1 - 2 of 2
Item: A Calibrated Olfactory Display for High Fidelity Virtual Environments (The Eurographics Association, 2016)
Authors: Dhokia, Amar; Doukakis, Efstratious; Asadipour, Ali; Harvey, Carlo; Bashford-Rogers, Thomas; Debattista, Kurt; Waterfield, Brian; Chalmers, Alan
Editors: Cagatay Turkay and Tao Ruan Wan
Olfactory displays provide a means to reproduce olfactory stimuli for use in virtual environments. Many of the designs produced by researchers strive to deliver stimuli to users quickly and focus on improving usability and portability, yet concentrate less on the accuracy needed to improve the fidelity of odour delivery. This paper provides guidance for building a reproducible, low-cost olfactory display that can deliver odours to users in a virtual environment at the accurate concentration levels typical of everyday interactions, including ranges below parts per million and into parts per billion. The paper investigates the build concerns of the olfactometer and its proper calibration in order to ensure the concentration accuracy of the device. An analysis is provided of the recovery rates of a specific compound after excitation, with insight into how this result can be generalised to the recovery rate of any volatile organic compound, given knowledge of that compound's vapour pressure.

Item: Selective BRDFs for High Fidelity Rendering (The Eurographics Association, 2016)
Authors: Bradley, Tim; Debattista, Kurt; Bashford-Rogers, Thomas; Harvey, Carlo; Doukakis, Stratos; Chalmers, Alan
Editors: Cagatay Turkay and Tao Ruan Wan
High fidelity rendering systems rely on accurate material representations to produce a realistic visual appearance. However, these accurate models can be slow to evaluate. This work presents an approach for approximating high-accuracy reflectance models with faster, less complicated functions in regions of an image that possess low visual importance. A subjective rating experiment was conducted in which thirty participants assessed the similarity of scenes rendered with low-quality reflectance models, a high-quality data-driven model, and saliency-based hybrids of those images. In two of the three scenes evaluated, no significant differences were found between the hybrid and reference images. This implies that in less visually salient regions of an image, computational gains can be achieved by approximating computationally expensive materials with simpler analytic models.
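The olfactory-display abstract above ties deliverable concentration ranges (ppm down to ppb) to a compound's vapour pressure. A minimal sketch of that relationship, assuming ideal-gas behaviour: the saturated headspace concentration is the ratio of the compound's vapour pressure to total pressure, and the carrier-gas dilution factor follows from the target level. The function names and the limonene figure are illustrative assumptions, not values or methods taken from the paper.

```python
# Sketch: saturated headspace concentration of a volatile organic
# compound, and the dilution needed to reach a target ppb level.
# Assumes ideal-gas behaviour at atmospheric pressure; the vapour
# pressure used in the example is an approximate literature value.

ATM_PA = 101325.0  # total pressure (Pa)

def saturation_ppm(vapour_pressure_pa: float) -> float:
    """Saturated vapour concentration in parts per million (mole fraction)."""
    return vapour_pressure_pa / ATM_PA * 1e6

def dilution_ratio(vapour_pressure_pa: float, target_ppb: float) -> float:
    """Carrier-gas dilution factor from saturation down to target_ppb."""
    return saturation_ppm(vapour_pressure_pa) * 1000.0 / target_ppb

# Example: limonene at ~25 C has a vapour pressure of roughly 190 Pa,
# giving a saturated concentration near 1900 ppm; reaching a 50 ppb
# stimulus therefore needs a dilution factor on the order of tens of
# thousands.
sat = saturation_ppm(190.0)
ratio = dilution_ratio(190.0, 50.0)
```

Higher vapour pressure means a richer headspace and a larger dilution requirement for the same target, which is why a single recovery-rate analysis can be rescaled to other compounds once their vapour pressures are known.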
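The selective-BRDF abstract above describes swapping an expensive reflectance model for a cheap analytic one in regions of low visual importance. A minimal per-pixel sketch of that idea, assuming a scalar saliency map and a fixed threshold: both BRDF functions and the threshold below are illustrative placeholders, not the paper's models or parameters.

```python
# Sketch: saliency-driven BRDF selection. Pixels above the saliency
# threshold get the expensive model; the rest fall back to ideal
# diffuse. The "data-driven" function here is a stand-in for a
# measured-BRDF table lookup, not a real measured model.

PI = 3.141592653589793

def lambertian(albedo: float, n_dot_l: float) -> float:
    """Cheap analytic BRDF: ideal diffuse reflection."""
    return albedo * max(n_dot_l, 0.0) / PI

def data_driven(albedo: float, n_dot_l: float) -> float:
    """Stand-in for an expensive measured-BRDF evaluation."""
    # A real implementation would interpolate a measured reflectance
    # table; a slightly non-Lambertian falloff stands in for that here.
    return albedo * max(n_dot_l, 0.0) ** 1.2 / PI

def shade(saliency: float, albedo: float, n_dot_l: float,
          threshold: float = 0.5) -> float:
    """Evaluate the accurate model only where the pixel is salient."""
    if saliency >= threshold:
        return data_driven(albedo, n_dot_l)
    return lambertian(albedo, n_dot_l)
```

The design point matches the experiment's finding: where saliency is low, the cheap model's error goes largely unnoticed, so the selection costs only a threshold test per pixel while reserving the expensive evaluation for regions viewers actually attend to.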