Content Retargeting Using Parameter-Parallel Facial Layers

Date
2011
Publisher
The Eurographics Association
Abstract
Facial motion retargeting approaches often transfer expressions by establishing correspondences between shared units of motion, such as action units, or spatial correspondences of landmarks between the source actor and target character faces. When the actor and character are structurally dissimilar, shared units of motion or spatial landmarks may not exist, and subtle styles of performance may differ. We present a method to deconstruct the content of an actor's facial expression into three parameter-parallel layers using a composition function, transfer the content to equivalent parameter-parallel layers for the character, and reconstruct the character's expression using the same composition function. Our algorithm uses the same parameter-parallel layered model of facial expression for both the actor and the character, separating the content of facial expressions into emotion, speech, and eye-blink layers. Facial motion in each layer is embedded in simplicial bases, each of which encodes semantically significant configurations of the face. We show the transfer of facial motion capture and video-based tracking of the eyes and mouth of an actor to a number of faces with dissimilar facial structure and expressive disposition.
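
The pipeline the abstract describes (decompose an actor's expression into per-layer weights, transfer the weights to the character's parameter-parallel bases, recompose) can be illustrated with a short NumPy sketch. Everything below is an assumption for illustration, not the authors' implementation: the composition function is taken to be additive, the decomposition is a plain joint least-squares fit standing in for the paper's simplicial (barycentric) embedding, and all names, layer sizes, and data are made up.

import numpy as np

def decompose(expression, rest, bases):
    # Stack every layer's basis shapes, expressed as offsets from the rest
    # pose, and solve one least-squares system for all layer weights jointly.
    # (The paper embeds motion in simplicial bases; unconstrained least
    # squares is a simplification used here for brevity.)
    A = np.hstack([b - rest[:, None] for b in bases])
    w, *_ = np.linalg.lstsq(A, expression - rest, rcond=None)
    # Split the stacked weight vector back into per-layer weight vectors.
    splits = np.cumsum([b.shape[1] for b in bases])[:-1]
    return np.split(w, splits)

def compose(rest, bases, weights):
    # Assumed additive composition: rest pose plus weighted layer offsets.
    out = rest.copy()
    for b, w in zip(bases, weights):
        out = out + (b - rest[:, None]) @ w
    return out

# Toy data: 4 facial landmarks in 3D, flattened to length-12 vectors.
rng = np.random.default_rng(0)
actor_rest, char_rest = rng.normal(size=12), rng.normal(size=12)

# Parameter-parallel bases: the same number of shapes per layer on both
# faces, with corresponding columns meant to encode semantically equivalent
# configurations (e.g. "jaw open" in the speech layer of either face).
layer_sizes = [3, 4, 1]  # emotion, speech, eye-blink (sizes are invented)
actor_bases = [rng.normal(size=(12, k)) for k in layer_sizes]
char_bases = [rng.normal(size=(12, k)) for k in layer_sizes]

# Retarget one actor frame: decompose on the actor's bases, then reuse the
# layer weights verbatim on the character's parallel bases.
actor_frame = rng.normal(size=12)
weights = decompose(actor_frame, actor_rest, actor_bases)
char_frame = compose(char_rest, char_bases, weights)
print(char_frame.shape)  # -> (12,)

Because the two models are parameter-parallel, no cross-face landmark correspondence is needed at transfer time; the layer weights are the interface between structurally dissimilar faces.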
Citation
@inproceedings{10.2312/SCA/SCA11/195-204,
  booktitle = {Eurographics/ACM SIGGRAPH Symposium on Computer Animation},
  editor    = {A. Bargteil and M. van de Panne},
  title     = {{Content Retargeting Using Parameter-Parallel Facial Layers}},
  author    = {Kholgade, Natasha and Matthews, Iain and Sheikh, Yaser},
  year      = {2011},
  publisher = {The Eurographics Association},
  ISSN      = {1727-5288},
  ISBN      = {978-1-4503-0923-3},
  DOI       = {10.2312/SCA/SCA11/195-204}
}