Visual and multimodal perception in immersive environments

dc.contributor.author: Malpica Mallo, Sandra
dc.date.accessioned: 2023-09-25T09:20:30Z
dc.date.available: 2023-09-25T09:20:30Z
dc.date.issued: 2023-02-22
dc.description.abstract: Throughout this thesis we use virtual reality (VR) as a tool to better understand human visual perception and attentional behavior. We leverage the intrinsic properties of VR to build user studies tailored to a set of different topics: VR provides increased control over sensory information compared to traditional media, as well as more natural interactions with the environment and an increased sense of realism. These qualities, together with the feeling of presence and immersion, increase the ecological validity of user studies conducted in VR. Furthermore, VR allows researchers to explore scenarios closer to the real world in a safe and reproducible way. By increasing the available knowledge about visual perception, we aim to provide visual computing researchers with more tools to overcome current limitations in the field, whether caused by hardware or software. Understanding human visual perception and attentional behavior is a challenging task: measuring such high-level cognitive processes is often not feasible, especially without medical-grade devices (which are commonly invasive for the user). For this reason, we settle on measuring observable data, both qualitative and quantitative. This data is further processed to obtain information about human behavior and to create high-level guidelines or models when possible. We present the contributions of this thesis around two topics: visual perception of realistic stimuli and multimodal perception in immersive environments. The first topic is devoted to visual appearance and has two separate contributions. First, we have created a learning-based appearance similarity metric by means of large-scale crowdsourced user studies and a deep learning model that correlates with human perception. Additionally, we study how low-level, asemantic visual features can be used to alter time perception in virtual reality, demonstrating the interplay between visual and temporal perception at interval-timing scales (several seconds to several minutes). Regarding the second topic, multimodal perception, we have first compiled an in-depth study of the state of the art in the use of different sensory modalities (visual, auditory, haptic, etc.) in immersive environments. Additionally, we have analyzed a crossmodal suppressive effect in virtual reality, where auditory cues can significantly degrade visual performance. Finally, we have shown how temporal synchronization is key to correctly perceiving multimodal events and enhancing their realism, even when visual quality is degraded. Ultimately, this thesis aims to increase the understanding of human behavior in immersive environments. This knowledge can benefit not only cognitive science researchers, but also computer graphics researchers, especially those in the field of VR, who will be able to use our findings to create better user experiences.
dc.description.sponsorship: Sandra Malpica was supported by a Gobierno de Aragon predoctoral grant (2018-2023).
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/3543882
dc.language.iso: en
dc.subject: Computer graphics, virtual reality, audiovisual perception
dc.title: Visual and multimodal perception in immersive environments
dc.type: Thesis
Files

Original bundle
Name: Thesis_SM.pdf
Size: 131.68 MB
Format: Adobe Portable Document Format
Description: Thesis document

License bundle
Name: license.txt
Size: 1.79 KB
Format: Item-specific license agreed upon to submission