Show simple item record

dc.contributor.author: Nakamura, Fumihiko [en_US]
dc.contributor.author: Suzuki, Katsuhiro [en_US]
dc.contributor.author: Masai, Katsutoshi [en_US]
dc.contributor.author: Itoh, Yuta [en_US]
dc.contributor.author: Sugiura, Yuta [en_US]
dc.contributor.author: Sugimoto, Maki [en_US]
dc.contributor.editor: Kakehi, Yasuaki and Hiyama, Atsushi [en_US]
dc.date.accessioned: 2019-09-11T05:43:07Z
dc.date.available: 2019-09-11T05:43:07Z
dc.date.issued: 2019
dc.identifier.isbn: 978-3-03868-083-3
dc.identifier.issn: 1727-530X
dc.identifier.uri: https://doi.org/10.2312/egve.20191274
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/egve20191274
dc.description.abstract: Facial expressions enrich communication via avatars. However, in common immersive virtual reality (VR) systems, facial occlusion by the head-mounted display (HMD) makes it difficult to capture the user's face. The mouth in particular plays an important role in facial expressions because it is essential for rich interaction. In this paper, we propose a technique that classifies mouth shapes into six classes using optical sensors embedded in the HMD and automatically labels the training dataset by vowel recognition. We experiment with five subjects to compare the recognition rates of machine learning under manual and automated labeling conditions. Results show that our method achieves average classification accuracies of 99.9% and 96.3% under manual and automated labeling conditions, respectively. These findings indicate that automated labeling is competitive with manual labeling, although its classification accuracy is slightly lower. Furthermore, we develop an application that reflects the mouth shape on avatars by blending the six mouth shapes and applying the blended result to the avatar. [en_US]
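The abstract's comparison of manual versus automated (vowel-recognition) labeling can be illustrated with a minimal sketch. Everything below is synthetic and hypothetical: the sensor count, class separation, label-noise rate, and the nearest-centroid classifier are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical sketch: a classifier trained on optical-sensor feature vectors
# whose labels come either from manual annotation or from (noisier) automatic
# vowel recognition. All values here are illustrative, not from the paper.
rng = np.random.default_rng(0)
n_classes, n_sensors, n_per_class = 6, 8, 60   # six mouth-shape classes

# Synthetic sensor frames: each class clusters around its own mean reading.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(n_per_class, n_sensors))
               for c in range(n_classes)])
y_manual = np.repeat(np.arange(n_classes), n_per_class)

# Automated labels: vowel recognition occasionally mislabels a frame.
y_auto = y_manual.copy()
flip = rng.random(y_auto.size) < 0.05          # assumed ~5% label noise
y_auto[flip] = rng.integers(0, n_classes, flip.sum())

def accuracy(labels):
    """Train a nearest-centroid classifier on half the frames, test on the rest,
    scoring against the labels available under that condition."""
    idx = rng.permutation(len(X))
    train, test = idx[: len(X) // 2], idx[len(X) // 2:]
    centroids = np.stack([X[train][labels[train] == c].mean(axis=0)
                          for c in range(n_classes)])
    dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
    pred = dists.argmin(axis=1)
    return (pred == labels[test]).mean()

acc_manual = accuracy(y_manual)
acc_auto = accuracy(y_auto)
print(f"manual labels: {acc_manual:.3f}, automated labels: {acc_auto:.3f}")
```

With well-separated synthetic classes, the manual condition scores near-perfectly while label noise in the automated condition costs a few percentage points, echoing the qualitative gap (99.9% vs. 96.3%) reported in the abstract.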
dc.publisher: The Eurographics Association [en_US]
dc.subject: Human-centered computing
dc.subject: Human-computer interaction (HCI)
dc.title: Automatic Labeling of Training Data by Vowel Recognition for Mouth Shape Recognition with Optical Sensors Embedded in Head-Mounted Display [en_US]
dc.description.seriesinformation: ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments
dc.description.sectionheaders: Sensing and Interaction
dc.identifier.doi: 10.2312/egve.20191274
dc.identifier.pages: 9-16

