
dc.contributor.author: Lazalde, Oscar M. Martinez
dc.contributor.author: Maddock, Steve
dc.contributor.editor: John Collomosse and Ian Grimstead
dc.date.accessioned: 2014-01-31T20:12:00Z
dc.date.available: 2014-01-31T20:12:00Z
dc.date.issued: 2010
dc.identifier.isbn: 978-3-905673-75-3
dc.identifier.uri: http://dx.doi.org/10.2312/LocalChapterEvents/TPCG/TPCG10/199-206
dc.description.abstract: A common approach to producing visual speech is to interpolate the parameters describing a sequence of mouth shapes, known as visemes, where visemes are the visual counterpart of phonemes. A single viseme typically represents a group of phonemes that are visually similar. Often these visemes are based on the static poses used in producing a phoneme. In this paper we investigate alternative representations for visemes, produced using motion-captured data, in conjunction with a constraint-based approach for visual speech production. We show that using visemes which incorporate more contextual information produces better results than using static pose visemes.
dc.publisher: The Eurographics Association
dc.title: Comparison of Different Types of Visemes using a Constraint-based Coarticulation Model
dc.description.seriesinformation: Theory and Practice of Computer Graphics
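
For illustration only: the baseline approach described in the abstract, interpolating the parameters of a sequence of viseme mouth shapes over time, might be sketched as below. The parameter layout, key times, and function name are hypothetical, and this simple linear interpolation is a stand-in for, not a reproduction of, the constraint-based coarticulation model the paper evaluates.

    import numpy as np

    def interpolate_visemes(times, visemes, t):
        """Return mouth-shape parameters at time t by linearly
        interpolating between the two bracketing viseme key poses.
        (Hypothetical simplification of viseme-based visual speech.)"""
        if t <= times[0]:
            return visemes[0]
        if t >= times[-1]:
            return visemes[-1]
        i = np.searchsorted(times, t) - 1          # index of the key pose before t
        a = (t - times[i]) / (times[i + 1] - times[i])  # blend weight in [0, 1]
        return (1 - a) * visemes[i] + a * visemes[i + 1]

    # Example: three visemes, each a (made-up) 4-parameter mouth shape,
    # keyed at 0.0 s, 0.2 s, and 0.5 s along the utterance.
    times = np.array([0.0, 0.2, 0.5])
    visemes = np.array([[0.0, 0.1, 0.0, 0.2],
                        [0.8, 0.4, 0.3, 0.1],
                        [0.1, 0.0, 0.6, 0.4]])
    print(interpolate_visemes(times, visemes, 0.35))  # halfway between visemes 2 and 3

A contextual viseme, as investigated in the paper, would replace each static key pose here with a motion-captured trajectory that already encodes the influence of neighbouring phonemes.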