
dc.contributor.author: Ostermann, Jörn [en_US]
dc.contributor.author: Weissenfeld, Axel [en_US]
dc.contributor.author: Liu, Kang [en_US]
dc.contributor.editor: Mike Chantler [en_US]
dc.date.accessioned: 2016-02-11T13:30:55Z
dc.date.available: 2016-02-11T13:30:55Z
dc.date.issued: 2005 [en_US]
dc.identifier.isbn: 3-905673-57-6 [en_US]
dc.identifier.uri: http://dx.doi.org/10.2312/vvg.20051020 [en_US]
dc.description.abstract: Facial animation has been combined with text-to-speech synthesis to create innovative multimodal interfaces. In this lecture, we present the technology and architecture needed to use this multimodal interface in a web-based environment to support education, entertainment, and e-commerce applications. Modern text-to-speech synthesizers using concatenative speech synthesis can generate high-quality speech. Face animation uses the phoneme and timing information provided by such a speech synthesizer to animate the mouth. Two basic technologies are used to render talking faces: 3D face models, as described in MPEG-4, may be used to give the impression of a talking cartoon or human-like character, while sample-based face models generated from recorded video enable the synthesis of a talking head that cannot be distinguished from a real person. Depending on the chosen face animation technology and the latency requirements, different architectures are required for delivering the talking head over the Internet in interactive applications. Keywords: face animation, visual speech [en_US]
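The abstract describes a pipeline in which the speech synthesizer emits phoneme and timing information that drives the mouth animation. A minimal sketch of that idea, assuming a hypothetical TTS output format of (phoneme, start, duration) tuples and an illustrative phoneme-to-viseme table (not the actual MPEG-4 mapping or any specific engine's API):

```python
# Hypothetical sketch: converting TTS phoneme/timing output into viseme
# keyframes for mouth animation. The mapping table is illustrative only.

PHONEME_TO_VISEME = {
    "p": "closed", "b": "closed", "m": "closed",      # bilabial closure
    "f": "lip-teeth", "v": "lip-teeth",               # labiodental
    "aa": "open-wide", "iy": "spread",                # vowels
    "sil": "neutral",                                 # silence
}

def visemes_from_tts(phonemes):
    """phonemes: list of (phoneme, start_ms, duration_ms) tuples,
    as a concatenative TTS engine might emit them.
    Returns a list of (time_ms, viseme) keyframes."""
    keyframes = []
    for phoneme, start_ms, duration_ms in phonemes:
        viseme = PHONEME_TO_VISEME.get(phoneme, "neutral")
        keyframes.append((start_ms, viseme))
        # return to neutral when the phoneme ends; a real renderer
        # would instead blend into the next viseme
        keyframes.append((start_ms + duration_ms, "neutral"))
    return keyframes
```

For the syllable "ma", `visemes_from_tts([("m", 0, 80), ("aa", 80, 200)])` yields keyframes closing the lips at 0 ms and opening the mouth wide at 80 ms; a renderer (3D model or sample-based) would interpolate between these poses.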
dc.publisher: The Eurographics Association [en_US]
dc.title: Talking Faces - Technologies and Applications [en_US]
dc.description.seriesinformation: Vision, Video, and Graphics (2005) [en_US]
dc.description.sectionheaders: Keynote 3 [en_US]
dc.identifier.doi: 10.2312/vvg.20051020 [en_US]
dc.identifier.pages: 157-157 [en_US]


This item appears in the following Collection(s)

  • VVG05
    ISBN 3-905673-57-6
