Easy Generation of Facial Animation Using Motion Graphs
Date
2018
Journal Title
Computer Graphics Forum
Journal ISSN
1467-8659
Publisher
© 2018 The Eurographics Association and John Wiley & Sons Ltd.
Abstract
Facial animation is a time‐consuming and cumbersome task that requires years of experience and/or a complex and expensive set‐up. This becomes an issue, especially when animating the multitude of secondary characters required, e.g. in films or video‐games. We address this problem with a novel technique that relies on motion graphs to represent a landmarked database. Separate graphs are created for different facial regions, allowing a reduced memory footprint compared to the original data. The common poses are identified using a Euclidean‐based similarity metric and merged into the same node. This process traditionally requires a manually chosen threshold; however, we simplify it by optimizing for the desired graph compression. Motion synthesis occurs by traversing the graph using Dijkstra's algorithm, and coherent noise is introduced by swapping some path nodes with their neighbours. Expression labels, extracted from the database, provide the control mechanism for animation. We present a way of creating facial animation with reduced input that automatically controls timing and pose detail. Our technique easily fits within video‐game and crowd animation contexts, allowing the characters to be more expressive with less effort. Furthermore, it provides a starting point for content creators aiming to bring more life into their characters.
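The synthesis step described in the abstract — finding a shortest path through the pose graph with Dijkstra's algorithm, then introducing coherent noise by swapping some interior path nodes with their graph neighbours — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy pose names, edge costs, and the `swap_prob` parameter are all assumptions for demonstration.

```python
import heapq
import random


def dijkstra(graph, start, goal):
    """Shortest path between pose nodes.

    `graph` maps each node to a dict of {neighbour: transition cost}
    (e.g. a Euclidean distance between landmark configurations).
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Walk the predecessor chain back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]


def add_noise(graph, path, swap_prob=0.3, rng=None):
    """Vary the optimal path: swap interior nodes with a random
    graph neighbour (excluding the adjacent path nodes), so repeated
    traversals produce slightly different motion."""
    rng = rng or random.Random(0)
    noisy = list(path)
    for i in range(1, len(noisy) - 1):
        if rng.random() < swap_prob:
            candidates = [n for n in graph[noisy[i]]
                          if n not in (noisy[i - 1], noisy[i + 1])]
            if candidates:
                noisy[i] = rng.choice(candidates)
    return noisy


# Hypothetical four-pose graph for one facial region.
poses = {
    "neutral": {"a": 1.0, "b": 2.0},
    "a": {"neutral": 1.0, "smile": 1.0, "b": 0.5},
    "b": {"neutral": 2.0, "smile": 1.0, "a": 0.5},
    "smile": {"a": 1.0, "b": 1.0},
}

optimal = dijkstra(poses, "neutral", "smile")   # -> ['neutral', 'a', 'smile']
varied = add_noise(poses, optimal, swap_prob=1.0)
```

In the paper's setting each node would hold a landmarked facial pose and each region (eyes, mouth, etc.) its own graph; the sketch only shows the traversal-plus-perturbation idea.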
Description
@article{10.1111:cgf.13218,
journal = {Computer Graphics Forum},
title = {{Easy Generation of Facial Animation Using Motion Graphs}},
author = {Serra, J. and Cetinaslan, O. and Ravikumar, S. and Orvalho, V. and Cosker, D.},
year = {2018},
publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.13218}
}