Evaluating the Effect of Multimodal Scenario Cues in an LLM-Supported Auditory VR Design System for Exposure Therapy

dc.contributor.author: Yamauchi, Yuta
dc.contributor.author: Tsuji, Yuta
dc.contributor.author: Ino, Keiko
dc.contributor.author: Sakaguchi, Masanori
dc.contributor.author: Zempo, Keiichi
dc.contributor.editor: Jorge, Joaquim A.
dc.contributor.editor: Sakata, Nobuchika
dc.date.accessioned: 2025-11-26T09:21:50Z
dc.date.available: 2025-11-26T09:21:50Z
dc.date.issued: 2025
dc.description.abstract: Post-Traumatic Stress Disorder (PTSD) is a prevalent disorder triggered by life-threatening trauma, and exposure therapy, in which patients confront traumatic stimuli, has proven highly effective for treating it. Methods have been proposed that present patients' traumatic situations using spatial sound (Auditory VR) to provide a sense of realism during therapy (Auditory VR exposure therapy). However, the Auditory VR used in therapy needs to be tailored to each patient, and it has conventionally been produced manually by third parties, resulting in long delays before Auditory VR exposure therapy can begin. Previous work developed a system that enables the creation of auditory stimuli through text-only interaction, allowing clinicians and patients to generate sounds without involving third parties. However, creating Auditory VR solely through natural-language interaction proved challenging, leading to problems with usability and sound quality. In this study, while retaining the basic approach of text-based Auditory VR generation, we developed a system that enables multimodal interaction by combining text with auditory presentation of scenarios and visual presentation of spatial designs. An evaluation experiment with participants with backgrounds in medicine or healthcare demonstrated that, compared to the previous system, our system improved usability and, based on subjective evaluations, achieved significantly higher overall sound-quality ratings.
dc.description.sectionheaders: Large Language Model (LLM)
dc.description.seriesinformation: ICAT-EGVE 2025 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments
dc.identifier.doi: 10.2312/egve.20251349
dc.identifier.isbn: 978-3-03868-278-3
dc.identifier.issn: 1727-530X
dc.identifier.pages: 10 pages
dc.identifier.uri: https://doi.org/10.2312/egve.20251349
dc.identifier.uri: https://diglib.eg.org/handle/10.2312/egve20251349
dc.publisher: The Eurographics Association
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Human-centered computing → Interaction techniques; Natural language interfaces; User studies; Applied computing → Health informatics
dc.title: Evaluating the Effect of Multimodal Scenario Cues in an LLM-Supported Auditory VR Design System for Exposure Therapy
File: egve20251349.pdf (1.62 MB, Adobe Portable Document Format)