
    • Learning Physics with a Hierarchical Graph Network 

      Chentanez, Nuttapong; Jeschke, Stefan; Müller, Matthias; Macklin, Miles (The Eurographics Association and John Wiley & Sons Ltd., 2022)
      We propose a hierarchical graph for learning physics and a novel way to handle obstacles. The finest level of the graph consists of the particles themselves. Coarser levels consist of the cells of sparse grids with successively ...
    • Monocular Facial Performance Capture Via Deep Expression Matching 

      Bailey, Stephen W.; Riviere, Jérémy; Mikkelsen, Morten; O'Brien, James F. (The Eurographics Association and John Wiley & Sons Ltd., 2022)
      Facial performance capture is the process of automatically animating a digital face according to a captured performance of an actor. Recent developments in this area have focused on high-quality results using expensive ...
    • PERGAMO: Personalized 3D Garments from Monocular Video 

      Casado-Elvira, Andrés; Comino Trinidad, Marc; Casas, Dan (The Eurographics Association and John Wiley & Sons Ltd., 2022)
      Clothing plays a fundamental role in digital humans. Current approaches to animate 3D garments are mostly based on realistic physics simulation; however, they typically suffer from two main issues: high computational ...
    • UnderPressure: Deep Learning for Foot Contact Detection, Ground Reaction Force Estimation and Footskate Cleanup 

      Mourot, Lucas; Hoyet, Ludovic; Clerc, François Le; Hellier, Pierre (The Eurographics Association and John Wiley & Sons Ltd., 2022)
      Human motion synthesis and editing are essential to many applications like video games, virtual reality, and film postproduction. However, they often introduce artefacts in motion capture data, which can be detrimental to ...
    • Voice2Face: Audio-driven Facial and Tongue Rig Animations with cVAEs 

      Villanueva Aylagas, Monica; Anadon Leon, Hector; Teye, Mattias; Tollmar, Konrad (The Eurographics Association and John Wiley & Sons Ltd., 2022)
      We present Voice2Face: a Deep Learning model that generates face and tongue animations directly from recorded speech. Our approach consists of two steps: a conditional Variational Autoencoder generates mesh animations from ...
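
The first entry above describes a particle-to-sparse-grid hierarchy (particles at the finest level, occupied sparse-grid cells at coarser levels). The following is a minimal sketch of one way such a hierarchy could be built; the cell sizes, the doubling coarsening factor, the use of NumPy, and the dictionary-based sparse grid are assumptions for illustration, not the authors' implementation.

    # Illustrative sketch only: bin particles into occupied cells of successively
    # coarser sparse grids. Parameters and data layout are assumptions.
    import numpy as np

    def build_hierarchy(particles: np.ndarray, base_cell: float, levels: int):
        """Map particles (N, 3) to occupied cells of successively coarser sparse grids.

        Element k of the returned list maps each occupied cell index (a 3-tuple)
        at level k to the particle indices it contains. Level 0 uses `base_cell`;
        each coarser level doubles the cell size (an assumed coarsening factor).
        """
        hierarchy = []
        for k in range(levels):
            cell = base_cell * (2 ** k)
            keys = np.floor(particles / cell).astype(np.int64)
            grid = {}
            for i, key in enumerate(map(tuple, keys)):
                grid.setdefault(key, []).append(i)   # only occupied cells are stored
            hierarchy.append(grid)
        return hierarchy

    # Example: 1000 random particles, 3 grid levels.
    pts = np.random.rand(1000, 3)
    levels = build_hierarchy(pts, base_cell=0.1, levels=3)
    print([len(g) for g in levels])   # number of occupied cells per level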
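
The Voice2Face entry above mentions a conditional Variational Autoencoder that generates mesh animations conditioned on speech. The sketch below shows a generic conditional VAE of that shape, decoding mesh vertex offsets from a latent code and an audio-feature window; all layer sizes, dimensions (AUDIO_DIM, MESH_DIM, LATENT_DIM), and the use of PyTorch are assumptions, not the published architecture.

    # Illustrative sketch only: a minimal conditional VAE over mesh frames,
    # conditioned on audio features. Sizes and names are assumptions.
    import torch
    import torch.nn as nn

    AUDIO_DIM, MESH_DIM, LATENT_DIM = 128, 3 * 5023, 32   # assumed dimensions

    class CondVAE(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder sees the mesh frame together with its audio condition.
            self.enc = nn.Sequential(
                nn.Linear(MESH_DIM + AUDIO_DIM, 512), nn.ReLU(),
                nn.Linear(512, 2 * LATENT_DIM),       # mean and log-variance
            )
            # Decoder reconstructs the mesh from the latent code and the condition.
            self.dec = nn.Sequential(
                nn.Linear(LATENT_DIM + AUDIO_DIM, 512), nn.ReLU(),
                nn.Linear(512, MESH_DIM),
            )

        def forward(self, mesh, audio):
            stats = self.enc(torch.cat([mesh, audio], dim=-1))
            mu, logvar = stats.chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
            recon = self.dec(torch.cat([z, audio], dim=-1))
            return recon, mu, logvar

    # At inference time a mesh frame can be sampled from the prior, conditioned on audio:
    model = CondVAE()
    audio = torch.randn(1, AUDIO_DIM)
    z = torch.randn(1, LATENT_DIM)
    frame = model.dec(torch.cat([z, audio], dim=-1))   # (1, MESH_DIM) vertex offsets
    print(frame.shape)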