    •   Eurographics DL Home
    • Eurographics Workshops and Symposia
    • EGVE: Eurographics Workshop on Virtual Environments
    • ICAT-EGVE2019
    • View Item
    Automatic Labeling of Training Data by Vowel Recognition for Mouth Shape Recognition with Optical Sensors Embedded in Head-Mounted Display

    View/Open
    009-016.pdf (2.975Mb)
    Date
    2019
    Author
    Nakamura, Fumihiko
    Suzuki, Katsuhiro
    Masai, Katsutoshi
    Itoh, Yuta
    Sugiura, Yuta
    Sugimoto, Maki
    Abstract
    Facial expressions enrich communication via avatars. However, in common immersive virtual reality (VR) systems, occlusion of the face by the head-mounted display (HMD) makes it difficult to capture the user's face. The mouth plays a particularly important role in facial expressions because it is essential for rich interaction. In this paper, we propose a technique that classifies mouth shapes into six classes using optical sensors embedded in an HMD and automatically labels the training dataset by vowel recognition. We conducted an experiment with five subjects to compare the recognition rates of machine learning under manual and automated labeling conditions. Results show that our method achieves average classification accuracies of 99.9% and 96.3% under the manual and automated labeling conditions, respectively. These findings indicate that automated labeling is competitive with manual labeling, although its classification accuracy is slightly lower. Furthermore, we developed an application that reflects the mouth shape on avatars: it blends the six mouth shapes and applies the blended result to the avatar.
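    The pipeline the abstract describes can be sketched in a few lines. The following is a toy illustration, not the authors' implementation: the synthetic sensor features, the vowel-recognizer stub, its error rate, and the nearest-centroid classifier are all assumptions made for this sketch (the paper trains a machine-learning classifier on real optical-sensor data from an HMD, labeled automatically by vowel recognition).

    ```python
    import random
    random.seed(0)

    # Six mouth-shape classes (assumed: five Japanese vowels plus neutral).
    CLASSES = ["a", "i", "u", "e", "o", "neutral"]

    def synth_sample(cls):
        """Toy 3-channel optical-sensor reading, clustered per class."""
        base = CLASSES.index(cls)
        return [base + random.gauss(0, 0.1) for _ in range(3)]

    def vowel_recognizer(spoken_cls):
        """Stand-in for speech-based vowel recognition: returns a label,
        occasionally wrong to mimic imperfect automatic labeling."""
        return spoken_cls if random.random() > 0.05 else random.choice(CLASSES)

    # Build an auto-labeled training set: sensor features paired with
    # labels from the (imperfect) vowel recognizer, no manual annotation.
    train = []
    for _ in range(600):
        true_cls = random.choice(CLASSES)
        train.append((synth_sample(true_cls), vowel_recognizer(true_cls)))

    # Train a nearest-centroid classifier on the auto-labeled data.
    centroids = {}
    for cls in CLASSES:
        feats = [f for f, lab in train if lab == cls]
        centroids[cls] = [sum(col) / len(feats) for col in zip(*feats)]

    def classify(x):
        """Predict the mouth shape whose centroid is closest to x."""
        return min(centroids,
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(x, centroids[c])))

    # Evaluate on fresh samples with ground-truth labels.
    test = [(synth_sample(c), c) for c in CLASSES for _ in range(50)]
    acc = sum(classify(f) == lab for f, lab in test) / len(test)
    print(f"accuracy with auto-labeled training data: {acc:.2f}")
    ```

    Because the label noise from the recognizer only perturbs the class centroids slightly, the classifier trained on automatic labels remains competitive with one trained on clean labels, mirroring the 96.3% vs. 99.9% result reported in the abstract.
    
    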
    BibTeX
    @inproceedings {ve.20191274,
    booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
    editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
    title = {{Automatic Labeling of Training Data by Vowel Recognition for Mouth Shape Recognition with Optical Sensors Embedded in Head-Mounted Display}},
    author = {Nakamura, Fumihiko and Suzuki, Katsuhiro and Masai, Katsutoshi and Itoh, Yuta and Sugiura, Yuta and Sugimoto, Maki},
    year = {2019},
    publisher = {The Eurographics Association},
    ISSN = {1727-530X},
    ISBN = {978-3-03868-083-3},
    DOI = {10.2312/egve.20191274}
    }
    URI
    https://doi.org/10.2312/egve.20191274
    https://diglib.eg.org:443/handle/10.2312/egve20191274
    Collections
    • ICAT-EGVE2019

    Eurographics Association copyright © 2013 - 2020 
    Send Feedback | Contact - Imprint | Data Privacy Policy | Disable Google Analytics
    Theme by @mire NV
    System hosted at Graz University of Technology.
