Dynamic, Expressive Speech Animation From a Single Mesh

Authors: Kevin Wampler, Daichi Sasaki, Li Zhang, Zoran Popovic
Editors: Dimitris Metaxas and Jovan Popovic
Year: 2007
ISBN: 978-3-905673-44-9
ISSN: 1727-5288
DOI: https://doi.org/10.2312/SCA/SCA07/053-062

Abstract: In this work we present a method for human face animation which generates animations for a novel person given just a single mesh of their face. These animations can be of arbitrary text and may include emotional expressions. We build a multilinear model from data which encapsulates the variation in dynamic face motions over changes in identity, expression, and over different texts. We then describe a synthesis method, consisting of a phoneme planning stage and a blending stage, which uses this model as a base and attempts to preserve both face shape and dynamics given a novel text and an emotion at each point in time.

Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Animation
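
To make the multilinear model concrete: such a model can be viewed as a core tensor contracted with one weight vector per factor of variation (identity, expression, speech content). The numpy sketch below, using hypothetical dimensions and names not taken from the paper, illustrates this kind of evaluation; the paper's actual model construction and two-stage synthesis are more involved.

import numpy as np

# Hypothetical sizes: a toy face with 100 vertices (300 coordinates)
# and small identity / expression / speech factor spaces.
n_coords, n_id, n_expr, n_speech = 300, 5, 4, 6

rng = np.random.default_rng(0)
# Stand-in for a core tensor that would normally be learned from data.
core = rng.standard_normal((n_coords, n_id, n_expr, n_speech))

def synthesize(core, w_id, w_expr, w_speech):
    # Contract the core tensor with one weight vector per factor,
    # yielding the flattened vertex positions of a single face mesh.
    return np.einsum('cies,i,e,s->c', core, w_id, w_expr, w_speech)

# Example: an average identity, the first expression basis vector,
# and uniform speech weights.
mesh = synthesize(core,
                  np.full(n_id, 1.0 / n_id),
                  np.eye(n_expr)[0],
                  np.full(n_speech, 1.0 / n_speech))
print(mesh.shape)  # (300,)

Animating over time would amount to varying the expression and speech weights from frame to frame; in the abstract's terms, choosing and smoothing those weights is the role of the phoneme planning and blending stages.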