Title: Comparison of Different Types of Visemes using a Constraint-based Coarticulation Model
Authors: Lazalde, Oscar M. Martinez; Maddock, Steve
Editors: John Collomosse and Ian Grimstead
Published: 2010
Date added to repository: 2014-01-31
ISBN: 978-3-905673-75-3
DOI: https://doi.org/10.2312/LocalChapterEvents/TPCG/TPCG10/199-206

Abstract: A common approach to producing visual speech is to interpolate the parameters describing a sequence of mouth shapes, known as visemes, where visemes are the visual counterparts of phonemes. A single viseme typically represents a group of phonemes that are visually similar. Often these visemes are based on the static poses used in producing a phoneme. In this paper we investigate alternative representations for visemes, produced using motion-captured data, in conjunction with a constraint-based approach for visual speech production. We show that using visemes which incorporate more contextual information produces better results than using static pose visemes.
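
As a note on the baseline technique the abstract summarizes, the following is a minimal sketch of producing visual speech by interpolating between static-pose viseme parameter vectors. This is not the paper's constraint-based coarticulation model (which replaces this simple scheme); the viseme names and the three-parameter mouth model (jaw opening, lip width, lip protrusion) are hypothetical illustrations.

```python
import numpy as np

# Hypothetical static-pose visemes: each maps to a mouth-shape parameter
# vector (jaw_open, lip_width, lip_protrusion). Values are illustrative.
VISEMES = {
    "sil": np.array([0.0, 0.5, 0.0]),   # silence / rest pose
    "AA":  np.array([0.9, 0.6, 0.1]),   # open vowel, as in "father"
    "OO":  np.array([0.4, 0.2, 0.9]),   # rounded vowel, as in "boot"
    "MM":  np.array([0.0, 0.5, 0.2]),   # bilabial closure, as in "m"
}

def interpolate_visemes(sequence, frames_per_transition=10):
    """Linearly interpolate mouth-shape parameters through a viseme sequence."""
    frames = []
    for a, b in zip(sequence, sequence[1:]):
        start, end = VISEMES[a], VISEMES[b]
        for i in range(frames_per_transition):
            t = i / frames_per_transition
            # Linear blend between consecutive static viseme poses.
            frames.append((1 - t) * start + t * end)
    frames.append(VISEMES[sequence[-1]])  # hold the final pose
    return np.stack(frames)

# Example: a "mama"-like mouth motion from silence and back.
motion = interpolate_visemes(["sil", "MM", "AA", "MM", "AA", "sil"])
print(motion.shape)  # (51, 3): one parameter vector per animation frame
```

Because each static pose is blended without regard to its neighbours, this baseline ignores coarticulation, which is the limitation that motivates the contextual, motion-captured visemes and the constraint-based model compared in the paper.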