Impact of Visual, Auditory, and Mixed Interfaces on Human-Robot Collaboration in Multi-Robot Environments

dc.contributor.author: Nagahara, Takumi (en_US)
dc.contributor.author: Techasarntikul, Nattaon (en_US)
dc.contributor.author: Ohsita, Yuichi (en_US)
dc.contributor.author: Shimonishi, Hideyuki (en_US)
dc.contributor.editor: Jorge, Joaquim A. (en_US)
dc.contributor.editor: Sakata, Nobuchika (en_US)
dc.date.accessioned: 2025-11-26T09:21:30Z
dc.date.available: 2025-11-26T09:21:30Z
dc.date.issued: 2025
dc.description.abstract: In the field of Human-Robot Collaboration (HRC) research, many studies have explored the use of visual and/or auditory cues as robot caution interfaces. However, many of these studies have focused on interfaces such as displays of a single robot's future position or hazardous areas, without validating them in complex environments where multiple robots operate simultaneously and users must perceive and respond to several robots at once. An increase in the number of robots can exceed human cognitive limits, potentially reducing safety and operational efficiency. To achieve safe and work-efficient HRC in environments with multiple robots, we propose a design for auditory and visual augmented reality interfaces that helps workers stay aware of multiple robots. We evaluated both single-modal and multi-modal interfaces under varying numbers of robots in the environment to explore how user perception and safety are affected. We conducted a comparative evaluation using multiple metrics, including the number of collisions, the closest distance to a robot, interface response time, task completion time, and subjective measures. Although multi-modal interfaces can reduce the average number of collisions by approximately 19%-49% compared to single-modal interfaces, and generally outperform them, their relative advantage diminished as the number of robots increased. This may be attributed to physical limitations of the environment: avoiding multiple robots simultaneously becomes inherently difficult, thereby reducing the impact of interface design on user performance. (en_US)
dc.description.sectionheaders: Sound
dc.description.seriesinformation: ICAT-EGVE 2025 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments
dc.identifier.doi: 10.2312/egve.20251344
dc.identifier.isbn: 978-3-03868-278-3
dc.identifier.issn: 1727-530X
dc.identifier.pages: 9 pages
dc.identifier.uri: https://doi.org/10.2312/egve.20251344
dc.identifier.uri: https://diglib.eg.org/handle/10.2312/egve20251344
dc.publisher: The Eurographics Association (en_US)
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Human-centered computing → Mixed / augmented reality; Human computer interaction (HCI)
dc.title: Impact of Visual, Auditory, and Mixed Interfaces on Human-Robot Collaboration in Multi-Robot Environments (en_US)
Files
Original bundle (2 files):
- egve20251344.pdf (18 MB, Adobe Portable Document Format)
- paper1041_mm.mp4 (44.78 MB, Video MP4)