Title: Facial Expression Synthesis using a Global-Local Multilinear Framework
Authors: Wang, Mengjiao; Bradley, Derek; Zafeiriou, Stefanos; Beeler, Thabo
Editors: Panozzo, Daniele; Assarsson, Ulf
Date issued: 2020-05-24
Journal: Computer Graphics Forum (ISSN 1467-8659), pp. 235-245
DOI: https://doi.org/10.1111/cgf.13926
URL: https://diglib.eg.org:443/handle/10.1111/cgf13926

Abstract: We present a practical method to synthesize plausible 3D facial expressions for a particular target subject. The ability to synthesize an entire facial rig from a single neutral expression has a large range of applications in both computer graphics and computer vision, ranging from the efficient and cost-effective creation of CG characters to scalable data generation for machine learning purposes. Unlike previous methods based on multilinear models, the proposed approach is capable of extrapolating well outside the sample pool, which allows it to plausibly predict the identity of the target subject and create artifact-free expression shapes while requiring only a small input dataset. We introduce global-local multilinear models that leverage the strengths of expression-specific and identity-specific local models combined with coarse motion estimations from a global model. Experimental results show that we achieve high-quality, plausible facial expression synthesis results for an individual that outperform existing methods both quantitatively and qualitatively.

Keywords: Computing methodologies; Computer vision representations; Shape modeling
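As background for the multilinear models the abstract refers to, the following is a minimal sketch of evaluating a Tucker-style bilinear identity-expression face model with NumPy. This is not the authors' implementation; all dimensions, names, and the random core tensor are illustrative assumptions.

```python
import numpy as np

# Hypothetical dimensions (illustrative only, not from the paper).
n_verts = 100   # number of mesh vertices
n_id = 5        # identity modes
n_expr = 4      # expression modes

rng = np.random.default_rng(0)

# Core tensor of a multilinear (Tucker-style) face model:
# axes are (stacked x,y,z vertex coordinates, identity mode, expression mode).
core = rng.standard_normal((3 * n_verts, n_id, n_expr))

def synthesize(core, w_id, w_expr):
    """Contract the core tensor with identity and expression weight
    vectors to produce one face shape as an (n_verts, 3) array."""
    shape = np.tensordot(core, w_id, axes=([1], [0]))  # (3V, n_expr)
    shape = shape @ w_expr                             # (3V,)
    return shape.reshape(-1, 3)

# Example: one identity/expression weight combination yields one mesh.
w_id = rng.standard_normal(n_id)
w_expr = rng.standard_normal(n_expr)
verts = synthesize(core, w_id, w_expr)
print(verts.shape)  # (100, 3)
```

A classical global multilinear model of this form couples all vertices through one core tensor; the paper's contribution is combining such a global model with expression- and identity-specific local models.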