Authors: Fukuoka, Masaaki; Verhulst, Adrien; Nakamura, Fumihiko; Takizawa, Ryo; Masai, Katsutoshi; Sugimoto, Maki; Kakehi, Yasuaki; Hiyama, Atsushi
Title: FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms
Date: 2019-09-11
Year: 2019
ISBN: 978-3-03868-083-3
ISSN: 1727-530X
DOI: https://doi.org/10.2312/egve.20191275
URI: https://diglib.eg.org:443/handle/10.2312/egve20191275
Pages: 17-24
CCS Concepts: Computer systems organization; External interfaces for robotics; Real-time operating systems; Software and its engineering; Virtual worlds training simulations
Abstract: Supernumerary Robotic Limbs (SRLs) can make physical activities easier, but require cooperation with the operator. To improve cooperation between the SRLs and the operator, the SRLs can try to predict the operator's intentions. One way to predict the operator's intentions is to use his/her Facial Expressions (FEs). Here we investigate the mapping between FEs and Supernumerary Robotic Arm (SRA) commands (e.g. grab, release). To measure FEs, we used an optical sensor-based approach (here, inside an HMD). The sensor data are fed to an SVM able to predict FEs. The SRAs can then carry out commands by predicting the operator's FEs (and, arguably, the operator's intention). We ran a data collection study (N=10) to determine which FEs to assign to which robotic arm commands in a Virtual Environment (VE). We investigated the mapping patterns by (1) performing an object reaching - grasping - releasing task using "any" FEs; (2) analyzing the sensor data and a self-reported FE questionnaire to find the most common FEs used for a given command; and (3) classifying the FEs into FE groups. We then ran another study (N=14) to find the most effective combination of FE groups and SRA commands by recording task completion time. As a result, we found that the optimal combinations are: (i) Eyes + Mouth for grabbing / releasing; and (ii) Mouth for extending / contracting the arms (i.e. along the forward axis).
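
The abstract describes a pipeline in which optical sensor readings captured inside the HMD are classified by an SVM into facial expressions, which are then mapped to SRA commands. The sketch below is a minimal illustration of that idea in Python with scikit-learn; the sensor count, FE class names, and FE-to-command mapping are assumptions for illustration only and do not reproduce the authors' implementation or the mappings found in their studies.

```python
# Minimal sketch (not the authors' code) of the pipeline in the abstract:
# optical-sensor features -> SVM facial-expression classifier -> SRA command.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical facial-expression (FE) classes and FE -> SRA command mapping;
# the actual classes and mapping are determined empirically in the paper.
FE_CLASSES = ["neutral", "mouth_open", "eyes_closed", "cheek_puff"]
FE_TO_COMMAND = {
    "neutral": "idle",
    "mouth_open": "grab",
    "eyes_closed": "release",
    "cheek_puff": "extend_arm",
}

rng = np.random.default_rng(0)
N_SENSORS = 16  # assumed number of optical sensors in the HMD

# Placeholder training data: in practice these would be recorded sensor frames
# labelled with the expression the operator performed.
X_train = rng.normal(size=(400, N_SENSORS))
y_train = rng.choice(FE_CLASSES, size=400)

# RBF-kernel SVM as a generic stand-in for the paper's FE classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

def command_from_frame(sensor_frame):
    """Predict the FE for one sensor frame and map it to an SRA command."""
    fe = clf.predict(sensor_frame.reshape(1, -1))[0]
    return FE_TO_COMMAND[fe]

# Example: classify one incoming sensor frame and print the resulting command.
print(command_from_frame(rng.normal(size=N_SENSORS)))
```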