A Computational Musculo-Skeletal Model for Animating Virtual Faces
Automatic synthesis of facial animation in Computer Graphics is a challenging task and, although the problem is three decades old by now, there is still no unified method to solve it. This is mainly due to the complex mathematical model required to reproduce the visual meanings of facial expressions, coupled with the computational speed needed to run interactive applications.

This thesis proposes two different methods to address the problem of animating realistic 3D faces at interactive rates.

The first method is an integrated physically-based method which mimics facial movements by reproducing the anatomical structure of a human head and the interaction among the bony structure, the facial muscles, and the skin. Unlike previously proposed approaches in the literature, the muscles are organized in a layered, interweaving structure lying on the skull; their shape can be affected both by the simulation of active contraction and by the motion of the underlying anatomical parts. A design tool has been developed to assist the user in defining the muscles in a natural manner by sketching their shape directly on the already existing bones and other muscles. The dynamics of the facial motion are computed through a position-based scheme ensuring real-time performance, control, and robustness. Experiments demonstrate that, through this model, realistic and expressive facial animation can be effectively synthesized on different input face models in real time on consumer-class platforms.

The second method for automatically achieving animation consists of a novel facial motion cloning technique. It is a purely geometric algorithm, able to transfer the motion from an animated source face to a different target face mesh, initially static, allowing the reuse of facial motion from already animated virtual heads. Its robustness and flexibility are assessed over several input data sets.
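The position-based scheme mentioned above can be illustrated with a minimal sketch of distance-constraint projection in the style of position-based dynamics. This is not the thesis code: the predict-project-update loop, parameters, and function names here are illustrative only, and external forces, bending constraints, and muscle attachments are omitted.

```python
# Minimal position-based dynamics sketch: particles connected by distance
# constraints, advanced with a predict-project-update loop. Pinned particles
# (inverse mass 0), e.g. vertices attached to the skull, never move.
# Time step, iteration count, and tolerance values are illustrative.

def simulate_step(pos, vel, inv_mass, constraints, dt=0.016, iters=10):
    """One PBD step: predict positions, project constraints, update velocities."""
    # 1. Predict positions from current velocities (no external forces here).
    pred = [[p[k] + dt * v[k] for k in range(3)] for p, v in zip(pos, vel)]

    # 2. Iteratively project each distance constraint (i, j, rest_length).
    for _ in range(iters):
        for i, j, rest in constraints:
            d = [pred[i][k] - pred[j][k] for k in range(3)]
            length = sum(c * c for c in d) ** 0.5
            w_sum = inv_mass[i] + inv_mass[j]
            if length < 1e-9 or w_sum == 0.0:
                continue  # degenerate edge, or both endpoints pinned
            corr = (length - rest) / (length * w_sum)
            for k in range(3):
                pred[i][k] -= inv_mass[i] * corr * d[k]
                pred[j][k] += inv_mass[j] * corr * d[k]

    # 3. Derive velocities from the position change and commit positions.
    new_vel = [[(q[k] - p[k]) / dt for k in range(3)] for p, q in zip(pos, pred)]
    return pred, new_vel
```

For example, a free particle connected by a unit-rest-length edge to a pinned one, but placed two units away, is pulled back onto the rest length after a single step; because correction is driven by positions rather than forces, the scheme stays stable at large time steps, which is what makes it suitable for interactive animation.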
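The motion cloning method outlined above is described as purely geometric. As a toy illustration only of the underlying transfer idea (not the thesis algorithm, which handles correspondence and motion adaptation far more carefully), one can borrow per-vertex displacements from an animated source face and apply them to a static target via nearest-neighbour correspondence; all names below are hypothetical.

```python
# Toy sketch of geometric motion cloning: copy per-vertex displacements
# from an animated source face to a static target face mesh using a
# nearest-neighbour correspondence between the two rest poses.
# A real motion cloning technique would also normalize and interpolate
# the motion; this only shows the displacement-transfer idea.

def clone_motion(src_rest, src_deformed, tgt_rest):
    """Deform the target by borrowing displacements from the source face."""
    def nearest(p, points):
        # Index of the source rest vertex closest to target vertex p.
        return min(range(len(points)),
                   key=lambda i: sum((p[k] - points[i][k]) ** 2 for k in range(3)))

    deformed = []
    for p in tgt_rest:
        i = nearest(p, src_rest)
        disp = [src_deformed[i][k] - src_rest[i][k] for k in range(3)]
        deformed.append([p[k] + disp[k] for k in range(3)])
    return deformed
```

Because only vertex positions are used, the transfer works between meshes with different connectivity, which is what allows facial motion to be reused across already animated virtual heads.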