
dc.contributor.author: Deng, Zhigang [en_US]
dc.contributor.author: Neumann, Ulrich [en_US]
dc.contributor.editor: Marie-Paule Cani and James O'Brien [en_US]
dc.date.accessioned: 2014-01-29T07:25:09Z
dc.date.available: 2014-01-29T07:25:09Z
dc.date.issued: 2006 [en_US]
dc.identifier.isbn: 3-905673-34-7 [en_US]
dc.identifier.issn: 1727-5288 [en_US]
dc.identifier.uri: http://dx.doi.org/10.2312/SCA/SCA06/251-259 [en_US]
dc.description.abstract: This paper presents a novel data-driven system for expressive facial animation synthesis and editing. Given phoneme-aligned speech input and its emotion modifiers (specifications), the system automatically generates expressive facial animation by concatenating captured motion data, while animators establish constraints and goals. A constrained dynamic programming algorithm searches for the best-matched captured motion nodes by minimizing a cost function. Users optionally specify "hard constraints" (motion-node constraints for expressing phoneme utterances) and "soft constraints" (emotion modifiers) to guide the search process. Users can also edit the processed facial motion node database by inserting and deleting motion nodes via a novel phoneme-Isomap interface. Facial animation synthesis experiments and objective trajectory comparisons between synthesized facial motion and captured motion demonstrate that this system is effective for producing realistic expressive facial animations. [en_US]
dc.publisher: The Eurographics Association [en_US]
dc.subject: Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism; I.6.8 [Simulation and Modeling]: Types of Simulation [en_US]
dc.title: eFASE: Expressive Facial Animation Synthesis and Editing with Phoneme-Isomap Controls [en_US]
dc.description.seriesinformation: ACM SIGGRAPH / Eurographics Symposium on Computer Animation [en_US]
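The abstract's constrained dynamic-programming search can be sketched in miniature: pick one captured motion node per phoneme slot so that the total cost is minimal, pruning candidates at slots pinned by "hard constraints" and adding a weighted penalty term for the "soft" emotion modifier. This is a hypothetical Viterbi-style illustration under assumed interfaces (`transition_cost`, `emotion_cost`, the constraint encoding), not the paper's actual cost function.

```python
def search_motion_nodes(candidates, transition_cost, emotion_cost,
                        hard_constraints=None, soft_weight=1.0):
    """Hypothetical constrained DP search over captured motion nodes.

    candidates[t]        -> list of motion-node ids usable at phoneme slot t
    transition_cost(a,b) -> smoothness cost of concatenating node a then b
    emotion_cost(n)      -> penalty for mismatching the emotion modifier
    hard_constraints     -> {slot: required node id}  (hard constraints)
    soft_weight          -> weight of the emotion (soft-constraint) term
    """
    hard_constraints = hard_constraints or {}
    T = len(candidates)
    # Hard constraints prune the candidate set at their slot to a single node.
    slots = [[hard_constraints[t]] if t in hard_constraints else candidates[t]
             for t in range(T)]
    # best[t][n] = (cost, predecessor) of the cheapest path ending in n at t.
    best = [{n: (soft_weight * emotion_cost(n), None) for n in slots[0]}]
    for t in range(1, T):
        layer = {}
        for n in slots[t]:
            prev, c = min(((p, pc + transition_cost(p, n))
                           for p, (pc, _) in best[t - 1].items()),
                          key=lambda pair: pair[1])
            layer[n] = (c + soft_weight * emotion_cost(n), prev)
        best.append(layer)
    # Back-track the minimum-cost node sequence.
    end = min(best[-1], key=lambda n: best[-1][n][0])
    path = [end]
    for t in range(T - 1, 0, -1):
        path.append(best[t][path[-1]][1])
    return list(reversed(path))
```

With a toy smoothness cost such as `abs(a - b)`, the search returns the smoothest node sequence, and pinning slot 0 via `hard_constraints={0: 2}` forces the sequence to start at node 2 regardless of cost.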

