Search Results

  • Fast Dynamic Facial Wrinkles
    (The Eurographics Association, 2024) Weiss, Sebastian; Chandran, Prashanth; Zoss, Gaspard; Bradley, Derek; Hu, Ruizhen; Charalambous, Panayiotis
    We present a new method to animate the dynamic motion of skin micro-wrinkles under facial expression deformation. Since wrinkles are formed as a reservoir of skin for stretching, our model deforms only wrinkles that are perpendicular to the stress axis. Specifically, those wrinkles become wider and shallower when stretched, and deeper and narrower when compressed. In contrast to previous methods that attempted to modify the neutral wrinkle displacement map, our approach is to modify the way wrinkles are constructed in the displacement map. To this end, we build upon a previous synthetic wrinkle generator that allows us to control the width and depth of individual wrinkles when generated on a per-frame basis. Furthermore, since constructing a displacement map per frame of animation is costly, we present a fast approximation approach using pre-computed displacement maps of wrinkles binned by stretch direction, which can be blended interactively in a shader. We compare both our high-quality and fast methods with previous techniques for wrinkle animation and demonstrate that our work retains more realistic details.
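    The fast approximation described above blends pre-computed displacement maps binned by stretch direction. A minimal sketch of that blending step is below; the function name, the array layout, and the uniform binning of directions over [0, π) are assumptions for illustration, not the authors' shader implementation.

```python
import numpy as np

def blend_wrinkle_maps(binned_maps, stretch_angle):
    """Blend pre-computed wrinkle displacement maps by stretch direction.

    binned_maps:   hypothetical array of shape (B, H, W), one displacement
                   map per stretch-direction bin covering [0, pi)
    stretch_angle: current stretch direction in radians

    Linearly blends the two maps whose bin centers bracket the stretch
    direction, mirroring what the abstract describes doing interactively
    in a shader.
    """
    num_bins = binned_maps.shape[0]
    bin_width = np.pi / num_bins
    # Fractional bin index of the current stretch direction.
    t = (stretch_angle % np.pi) / bin_width
    lo = int(np.floor(t)) % num_bins
    hi = (lo + 1) % num_bins   # wrap around: direction is periodic in pi
    w = t - np.floor(t)        # blend weight toward the upper bin
    return (1.0 - w) * binned_maps[lo] + w * binned_maps[hi]
```

    In a real renderer the same two-map lookup and lerp would run per texel in a fragment shader, with the bin textures bound as a texture array.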
  • Next Generation 3D Face Models
    (The Eurographics Association, 2024) Chandran, Prashanth; Yang, Lingchen; Mania, Katerina; Artusi, Alessandro
    Having a compact, expressive and artist-friendly way to represent and manipulate human faces has been of prime interest to the visual effects community for the past several decades, as face models play a very important role in many face capture workflows. In this short course, we go over the evolution of 3D face models used to model and animate facial identity and expression in the computer graphics community, and discuss how the recent emergence of deep face models is transforming this landscape by enabling new artistic choices. In this first installment, the course will take the audience through the evolution of face models, starting with simple blendshape models introduced in the 1980s, which continue to be extremely popular today, to recent deep shape models that utilize neural networks to represent and manipulate face shapes in an artist-friendly fashion. As the course is meant to be beginner-friendly, it will commence with a quick introduction to non-neural parametric shape models, starting with linear blendshape and morphable models. We will then switch focus to deep shape models, particularly those that offer intuitive control to artists. We will discuss multiple variants of such deep face models that i) allow semantic control, ii) are agnostic to the underlying topology of the manipulated shape, iii) provide the ability to explicitly model a sequence of 3D shapes or animations, and iv) allow for the simulation of physical effects. Applications that will be discussed include face shape synthesis, identity and expression interpolation, rig generation, performance retargeting, animation synthesis and more.
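    The linear blendshape models that open the course have a very compact formulation: the deformed face is the neutral shape plus a weighted sum of per-expression delta shapes. A minimal sketch, assuming a shared vertex topology (array shapes and names are illustrative, not any specific production rig):

```python
import numpy as np

def apply_blendshapes(neutral, deltas, weights):
    """Classic linear blendshape model.

    neutral: (V, 3) neutral vertex positions
    deltas:  (K, V, 3) expression deltas (expression shape minus neutral)
    weights: (K,) blendshape weights, typically in [0, 1]

    Returns the deformed (V, 3) vertex positions:
    neutral + sum_k weights[k] * deltas[k].
    """
    return neutral + np.tensordot(weights, deltas, axes=1)
```

    With all weights at zero this recovers the neutral face, and setting a single weight to 1 reproduces that sculpted expression exactly; the deep models covered later in the course generalize this linear mapping with neural networks.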
  • Next Generation 3D Face Models
    (The Eurographics Association, 2025) Chandran, Prashanth; Mantiuk, Rafal; Hildebrandt, Klaus
    Data-driven 3D face models are an important tool for applications like facial animation, face reconstruction and tracking, and can serve as a powerful prior for the complex nonrigid deformation of human faces. While linear 3D morphable models, or 3DMMs, have traditionally been employed by artists to cater to these applications, in the last few years several deep face models have been introduced that make use of neural networks to manipulate face shapes and offer greater flexibility while also retaining the intuitive control of traditional face models. This recent class of semantic deep face models has the potential to simplify existing facial animation workflows and enable artists to make a wider range of creative choices. However, as these neural tools are still very recent and fresh out of academic research, there is a need to start a conversation with artists and industry professionals on how such neural networks can be incorporated into existing workflows. This course aims to take a first step in this direction by providing a gentle introduction to several types of deep face models introduced in recent years by academia and how each of them resolves several problems encountered in conventional facial animation. The primary intention of the course is to provide artists and industry professionals with an understanding of the state of the art in neural 3D face models, and to inspire them to consider how these new tools can be incorporated into existing industry workflows to produce better content faster. The course will also serve the purpose of providing a gentle introduction to face modeling and animation to students looking to get familiar with the field. Experienced participants with a strong background in the field would also be able to identify possible directions for future research. The course will be presented in a lecture format with slides. Concepts from related papers will be explained in enough detail to help the audience make informed decisions on using these tools and understand their current shortcomings.
  • Neural Facial Deformation Transfer
    (The Eurographics Association, 2025) Chandran, Prashanth; Ciccone, Loïc; Zoss, Gaspard; Bradley, Derek; Ceylan, Duygu; Li, Tzu-Mao
    We address the practical problem of generating facial blendshapes and reference animations for a new 3D character in production environments where blendshape expressions and reference animations are readily available on a pre-defined template character. We propose Neural Facial Deformation Transfer (NFDT); a data-driven approach to transfer facial expressions from such a template character to new target characters given only the target's neutral shape. To accomplish this, we first present a simple data generation strategy to automatically create a large training dataset consisting of pairs of template and target character shapes in the same expression. We then leverage this dataset through a decoder-only transformer that transfers facial expressions from the template character to a target character in high fidelity. Through quantitative evaluations and a user study, we demonstrate that NFDT surpasses the previous state-of-the-art in facial expression transfer. NFDT provides good results across varying mesh topologies, generalizes to humanoid creatures, and can save time and cost in facial animation workflows.