Expressive Facial Gestures From Motion Capture Data

dc.contributor.author: Ju, Eunjung
dc.contributor.author: Lee, Jehee
dc.date.accessioned: 2015-02-21T16:18:51Z
dc.date.available: 2015-02-21T16:18:51Z
dc.date.issued: 2008
dc.description.abstract: Human facial gestures often exhibit such natural stochastic variations as how often the eyes blink, how often the eyebrows and the nose twitch, and how the head moves while speaking. The stochastic movements of facial features are key ingredients for generating convincing facial expressions. Although such small variations have been simulated using noise functions in many graphics applications, modulating noise functions to match natural variations induced from the affective states and the personality of characters is difficult and not intuitive. We present a technique for generating subtle expressive facial gestures (facial expressions and head motion) semi-automatically from motion capture data. Our approach is based on Markov random fields that are simulated in two levels. In the lower level, the coordinated movements of facial features are captured, parameterized, and transferred to synthetic faces using basis shapes. The upper level represents independent stochastic behavior of facial features. The experimental results show that our system generates expressive facial gestures synchronized with input speech.
dc.description.number: 2
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 27
dc.identifier.doi: 10.1111/j.1467-8659.2008.01135.x
dc.identifier.issn: 1467-8659
dc.identifier.pages: 381-388
dc.identifier.uri: https://doi.org/10.1111/j.1467-8659.2008.01135.x
dc.publisher: The Eurographics Association and Blackwell Publishing Ltd
dc.title: Expressive Facial Gestures From Motion Capture Data