Search Results

  • Fast Dynamic Facial Wrinkles
    (The Eurographics Association, 2024) Weiss, Sebastian; Chandran, Prashanth; Zoss, Gaspard; Bradley, Derek; Hu, Ruizhen; Charalambous, Panayiotis
    We present a new method to animate the dynamic motion of skin micro-wrinkles under facial expression deformation. Since wrinkles form as a reservoir of skin for stretching, our model only deforms wrinkles that are perpendicular to the stress axis. Specifically, those wrinkles become wider and shallower when stretched, and deeper and narrower when compressed. In contrast to previous methods that attempted to modify the neutral wrinkle displacement map, our approach is to modify the way wrinkles are constructed in the displacement map. To this end, we build upon a previous synthetic wrinkle generator that allows us to control the width and depth of individual wrinkles when generated on a per-frame basis. Furthermore, since constructing a displacement map per frame of animation is costly, we present a fast approximation approach using pre-computed displacement maps of wrinkles binned by stretch direction, which can be blended interactively in a shader. We compare both our high-quality and fast methods with previous techniques for wrinkle animation and demonstrate that our work retains more realistic details.
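    The fast approximation described above, blending pre-computed displacement maps binned by stretch direction, could be sketched as follows. This is an illustrative assumption of how such blending might work, not the authors' code; the function name, the bin layout, and the inverse-distance weighting scheme are all hypothetical.

    ```python
    import numpy as np

    def blend_binned_wrinkles(bins, bin_angles, stretch_angle):
        """Blend pre-computed wrinkle displacement maps by stretch direction.

        bins          : (K, H, W) array, one displacement map per direction bin
        bin_angles    : (K,) bin centre angles in radians
        stretch_angle : current stretch direction in radians
        """
        # Angular distance on a half-circle (wrinkle direction is unsigned).
        d = np.abs(((bin_angles - stretch_angle) + np.pi / 2) % np.pi - np.pi / 2)
        # Simple hat-function weights around each bin centre; a fragment
        # shader could evaluate the same weights per pixel.
        w = np.maximum(0.0, 1.0 - d / (np.pi / len(bin_angles)))
        if w.sum() == 0.0:
            w = np.ones_like(w)
        w /= w.sum()
        # Weighted sum over the K binned maps.
        return np.tensordot(w, bins, axes=1)

    maps = np.random.rand(4, 8, 8)                       # 4 direction bins, 8x8 maps
    angles = np.linspace(0, np.pi, 4, endpoint=False)    # bin centres over [0, pi)
    blended = blend_binned_wrinkles(maps, angles, stretch_angle=0.3)
    ```

    Because the weights are normalized, blending identical maps returns the map unchanged, and the result varies smoothly as the stretch direction rotates.
    
    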
  • Next Generation 3D Face Models
    (The Eurographics Association, 2024) Chandran, Prashanth; Yang, Lingchen; Mania, Katerina; Artusi, Alessandro
    Having a compact, expressive and artist-friendly way to represent and manipulate human faces has been of prime interest to the visual effects community for the past several decades, as face models play a very important role in many face capture workflows. In this short course, we go over the evolution of 3D face models used to model and animate facial identity and expression in the computer graphics community, and discuss how the recent emergence of deep face models is transforming this landscape by enabling new artistic choices. In this first installment, the course will take the audience through the evolution of face models, starting with simple blendshape models introduced in the 1980s, which continue to be extremely popular today, to recent deep shape models that utilize neural networks to represent and manipulate face shapes in an artist-friendly fashion. As the course is meant to be beginner friendly, it will commence with a quick introduction to non-neural parametric shape models, starting with linear blendshape and morphable models. We will then switch focus to deep shape models, particularly those that offer intuitive control to artists. We will discuss multiple variants of such deep face models that i) allow semantic control, ii) are agnostic to the underlying topology of the manipulated shape, iii) provide the ability to explicitly model a sequence of 3D shapes or animations, and iv) allow for the simulation of physical effects. Applications that will be discussed include face shape synthesis, identity and expression interpolation, rig generation, performance retargeting, animation synthesis and more.
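    The linear blendshape model the course starts from can be written as the neutral shape plus a weighted sum of per-expression vertex offsets. A minimal sketch, with illustrative names and toy data (not from the course material):

    ```python
    import numpy as np

    def blendshape(neutral, deltas, weights):
        """Evaluate a linear blendshape model.

        neutral : (V, 3) neutral-face vertex positions
        deltas  : (K, V, 3) per-expression offsets from the neutral shape
        weights : (K,) activation weight for each expression
        """
        # face = neutral + sum_k weights[k] * deltas[k]
        return neutral + np.tensordot(weights, deltas, axes=1)

    neutral = np.zeros((5, 3))           # tiny 5-vertex "face" for illustration
    deltas = np.random.randn(3, 5, 3)    # 3 expression blendshapes
    face = blendshape(neutral, deltas, np.array([1.0, 0.0, 0.0]))
    ```

    Setting a single weight to 1 reproduces that expression exactly; intermediate weights interpolate linearly, which is what makes the model compact and artist-friendly, and also what limits its expressiveness compared to the deep models discussed later.
    
    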
  • Improved Lighting Models for Facial Appearance Capture
    (The Eurographics Association, 2022) Xu, Yingyan; Riviere, Jérémy; Zoss, Gaspard; Chandran, Prashanth; Bradley, Derek; Gotardo, Paulo; Pelechano, Nuria; Vanderhaeghe, David
    Facial appearance capture techniques estimate geometry and reflectance properties of facial skin by performing a computationally intensive inverse rendering optimization, in which one or more images are re-rendered a large number of times and compared to real images coming from multiple cameras. Due to the high computational burden, these techniques often make several simplifying assumptions to make the problem more tractable. For example, it is common to assume that the scene consists of only distant light sources, and to ignore indirect bounces of light, both on and within the surface. Also, methods based on polarized lighting often simplify the light's interaction with the surface and assume perfect separation of diffuse and specular reflectance. In this paper, we move in the opposite direction and demonstrate the impact on facial appearance capture quality when departing from these idealized conditions towards models that seek to more accurately represent the lighting, while at the same time minimally increasing the computational burden. We compare the results obtained with a state-of-the-art appearance capture method [RGB*20], with and without our proposed improvements to the lighting model.
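    The inverse-rendering loop described above can be illustrated with a toy analysis-by-synthesis example: repeatedly re-render under the current parameter estimate and minimize the photometric difference to the observed image. The single-pixel Lambertian renderer and the distant-light assumption below are exactly the kind of simplification the paper examines; this sketch is an assumption for illustration, not the paper's optimizer.

    ```python
    import numpy as np

    def render(albedo, normal, light_dir):
        # Lambertian shading under a single distant light: a common
        # simplifying assumption in appearance capture pipelines.
        return albedo * max(0.0, float(np.dot(normal, light_dir)))

    normal = np.array([0.0, 0.0, 1.0])
    light = np.array([0.0, 0.0, 1.0])
    observed = render(0.7, normal, light)   # "captured" pixel, true albedo 0.7

    # Gradient descent on the photometric loss 0.5 * (render - observed)**2.
    albedo, lr = 0.1, 0.5
    for _ in range(100):
        residual = render(albedo, normal, light) - observed
        # d(loss)/d(albedo): the shading term is constant w.r.t. albedo.
        grad = residual * max(0.0, float(np.dot(normal, light)))
        albedo -= lr * grad
    ```

    Real systems re-render full multi-view images with far richer light transport at each iteration, which is why the computational burden motivates the simplifications the paper revisits.
    
    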