
dc.contributor.author	Fukuoka, Masaaki	en_US
dc.contributor.author	Verhulst, Adrien	en_US
dc.contributor.author	Nakamura, Fumihiko	en_US
dc.contributor.author	Takizawa, Ryo	en_US
dc.contributor.author	Masai, Katsutoshi	en_US
dc.contributor.author	Sugimoto, Maki	en_US
dc.contributor.editor	Kakehi, Yasuaki and Hiyama, Atsushi	en_US
dc.date.accessioned	2019-09-11T05:43:08Z
dc.date.available	2019-09-11T05:43:08Z
dc.date.issued	2019
dc.identifier.isbn	978-3-03868-083-3
dc.identifier.issn	1727-530X
dc.identifier.uri	https://doi.org/10.2312/egve.20191275
dc.identifier.uri	https://diglib.eg.org:443/handle/10.2312/egve20191275
dc.description.abstract	Supernumerary Robotic Limbs (SRLs) can make physical activities easier, but require cooperation with the operator. To improve cooperation between the SRLs and the operator, the SRLs can try to predict the operator's intentions. One way to predict the operator's intentions is to use his/her Facial Expressions (FEs). Here we investigate the mapping between FEs and Supernumerary Robotic Arm (SRA) commands (e.g. grab, release). To measure FEs, we used an optical sensor-based approach (here inside an HMD). The sensor data are fed to an SVM able to predict FEs. The SRAs can then carry out commands by predicting the operator's FEs (and, arguably, the operator's intention). We ran a data collection study (N=10) to determine which FEs to assign to which robotic arm commands in a Virtual Environment (VE). We investigated the mapping patterns by (1) performing an object reaching, grasping, and releasing task using "any" FEs; (2) analyzing the sensor data and a self-reported FE questionnaire to find the most common FEs used for a given command; (3) classifying the FEs into FE groups. We then ran another study (N=14) to find the most effective combination of FE groups / SRA commands by recording task completion time. As a result, we found that the optimal combinations are: (i) Eyes + Mouth for grabbing / releasing; and (ii) Mouth for extending / contracting the arms (i.e. along the forward axis).	en_US
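The pipeline described in the abstract (optical sensor data → SVM → predicted FE → SRA command) can be illustrated with a minimal sketch. This is not the authors' code: the feature vectors, FE labels, and the `FE_TO_COMMAND` mapping below are hypothetical placeholders, and scikit-learn's `SVC` stands in for whatever SVM implementation the paper used. The mapping itself follows the paper's reported result (Eyes + Mouth for grab/release, Mouth for extend/contract).

```python
# Hedged sketch of the abstract's pipeline: sensor data -> SVM -> FE -> command.
# All sensor values, labels, and mappings are illustrative, not from the paper.
from sklearn.svm import SVC

# Toy optical-sensor feature vectors (e.g. per-region reflectance readings
# from HMD-mounted sensors), one well-separated cluster per FE group.
X_train = [
    [0.9, 0.1, 0.1], [0.8, 0.2, 0.0],   # "eyes"-dominant expression samples
    [0.1, 0.9, 0.8], [0.0, 0.8, 0.9],   # "mouth"-dominant expression samples
    [0.9, 0.9, 0.8], [0.8, 0.8, 0.9],   # combined "eyes_mouth" samples
]
y_train = ["eyes", "eyes", "mouth", "mouth", "eyes_mouth", "eyes_mouth"]

# SVM predicting the FE group from a sensor sample.
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)

# FE-group -> SRA-command mapping, following the combination the paper
# found most effective: Eyes + Mouth for grabbing/releasing, Mouth for
# extending/contracting the arm along the forward axis.
FE_TO_COMMAND = {
    "eyes_mouth": "grab_release",
    "mouth": "extend_contract",
}

def fe_to_command(sensor_sample):
    """Predict the FE group for one sample and look up the SRA command."""
    fe = clf.predict([sensor_sample])[0]
    return FE_TO_COMMAND.get(fe, "idle")
```

With these toy clusters, a sample near the combined eyes-and-mouth cluster resolves to the grab/release command, while a mouth-only sample resolves to extend/contract; unmapped FE groups fall back to an idle command.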
dc.publisher	The Eurographics Association	en_US
dc.subject	Computer systems organization
dc.subject	External interfaces for robotics
dc.subject	Real-time operating systems
dc.subject	Software and its engineering
dc.subject	Virtual worlds training simulations
dc.title	FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms	en_US
dc.description.seriesinformation	ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments
dc.description.sectionheaders	Sensing and Interaction
dc.identifier.doi	10.2312/egve.20191275
dc.identifier.pages	17-24


