dc.contributor.author  Harvey, Carlo  en_US
dc.contributor.author  Debattista, Kurt  en_US
dc.contributor.author  Bashford-Rogers, Thomas  en_US
dc.contributor.author  Chalmers, Alan  en_US
dc.contributor.editor  Chen, Min and Zhang, Hao (Richard)  en_US
dc.date.accessioned  2017-03-13T18:13:02Z
dc.date.available  2017-03-13T18:13:02Z
dc.date.issued  2017
dc.identifier.issn  1467-8659
dc.identifier.uri  http://dx.doi.org/10.1111/cgf.12793
dc.identifier.uri  https://diglib.eg.org:443/handle/10.1111/cgf12793
dc.description.abstract  A major challenge in generating high-fidelity virtual environments (VEs) is to be able to provide realism at interactive rates. The high-fidelity simulation of light and sound is still unachievable in real time as such physical accuracy is very computationally demanding. Only recently has visual perception been used in high-fidelity rendering to improve performance by a series of novel exploitations; to render parts of the scene that are not currently being attended to by the viewer at a much lower quality without the difference being perceived. This paper investigates the effect spatialized directional sound has on the visual attention of a user towards rendered images. These perceptual artefacts are utilized in selective rendering pipelines via the use of multi-modal maps. The multi-modal maps are tested through psychophysical experiments to examine their applicability to selective rendering algorithms, with a series of fixed cost rendering functions, and are found to perform significantly better than only using image saliency maps that are naively applied to multi-modal VEs.  en_US
dc.publisher  © 2017 The Eurographics Association and John Wiley & Sons Ltd.  en_US
dc.subject  multi-modal
dc.subject  cross-modal
dc.subject  saliency
dc.subject  sound
dc.subject  graphics
dc.subject  selective rendering
dc.subject  I.3.3 [Computer Graphics]: Picture/Image Generation—Viewing Algorithms
dc.subject  I.4.8 [Computer Graphics]: Image Processing and Computer Vision—Scene Analysis - Object Recognition
dc.subject  I.4.8 [Computer Graphics]: Image Processing and Computer Vision—Scene Analysis - Tracking
dc.title  Multi-Modal Perception for Selective Rendering  en_US
dc.description.seriesinformation  Computer Graphics Forum
dc.description.sectionheaders  Articles
dc.description.volume  36
dc.description.number  1
dc.identifier.doi  10.1111/cgf.12793
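
The abstract above outlines the core idea: blend image-space visual saliency with the image-space influence of a spatialized sound source into a single multi-modal map, then spend a fixed per-frame rendering budget in proportion to that map. The following is a minimal illustrative sketch of that idea in Python, not the authors' implementation: the Gaussian audio term, the blending weight w_audio, and the proportional sample allocator are all assumptions made here for illustration.

    # Illustrative sketch only (hypothetical names and weights; not the
    # method from Harvey et al. 2017).
    import numpy as np

    def multimodal_map(visual_saliency, sound_px, sound_py, sigma=40.0, w_audio=0.5):
        # Blend a visual saliency map with a Gaussian bump centred on the
        # image-space projection (sound_px, sound_py) of a spatialized
        # sound source.
        h, w = visual_saliency.shape
        ys, xs = np.mgrid[0:h, 0:w]
        audio = np.exp(-((xs - sound_px) ** 2 + (ys - sound_py) ** 2)
                       / (2.0 * sigma ** 2))
        blended = (1.0 - w_audio) * visual_saliency + w_audio * audio
        return blended / blended.max()  # normalise to [0, 1]

    def allocate_samples(importance, total_budget):
        # Toy fixed-cost rendering function: distribute a fixed number of
        # samples per frame in proportion to per-pixel importance.
        weights = importance / importance.sum()
        samples = np.floor(weights * total_budget).astype(int)
        return np.maximum(samples, 1)  # keep at least one sample per pixel

    # Toy usage: mostly uniform saliency with one salient region, and a
    # sound source projected to pixel (160, 90).
    sal = np.full((180, 320), 0.2)
    sal[60:120, 100:180] = 1.0
    mm = multimodal_map(sal, sound_px=160.0, sound_py=90.0)
    spp = allocate_samples(mm, total_budget=2_000_000)
    print(spp.min(), spp.max(), int(spp.sum()))

Note that in this toy allocator the flooring plus the one-sample minimum can drift slightly from the nominal budget; a real fixed-cost pipeline would redistribute the remainder.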

