Search Results

Now showing 1 - 10 of 66
  • Item
    Learning a Style Space for Interactive Line Drawing Synthesis from Animated 3D Models
    (The Eurographics Association, 2022) Wang, Zeyu; Wang, Tuanfeng Y.; Dorsey, Julie; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
    Most non-photorealistic rendering (NPR) methods for line drawing synthesis operate on a static shape. They are not tailored to animated 3D models because of the extensive per-frame parameter tuning needed to achieve the intended look and natural transitions. This paper introduces a framework for interactive line drawing synthesis from animated 3D models based on a learned style space for drawing representation and interpolation. We refer to style as the relationship between stroke placement in a line drawing and its corresponding geometric properties. Starting from a given animation sequence of a 3D character, a user creates drawings for a set of keyframes. Our system embeds the raster drawings into a latent style space after disentangling them from the underlying geometry. By traversing the latent space, our system enables a smooth transition between the input keyframes. The user may also edit, add, or remove keyframes interactively, as in a typical keyframe-based workflow. We implement our system with deep neural networks trained on synthetic line drawings produced by a combination of NPR methods. Our drawing-specific supervision and optimization-based embedding mechanism allow generalization from NPR line drawings to user-created drawings at run time. Experiments show that our approach generates high-quality line drawing animations while allowing interactive control of the drawing style across frames.
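    As a rough illustration of the interpolation step, the sketch below blends two keyframe style codes in a latent space. Here embed and decode are hypothetical placeholders for the paper's trained encoder and decoder networks, and all sizes are illustrative.

      import numpy as np

      def embed(drawing):
          # Map a raster keyframe drawing to a latent style code (placeholder
          # for the trained encoder).
          return drawing.reshape(-1)[:8]

      def decode(style_code, geometry):
          # Synthesize a frame from a style code plus per-frame geometric
          # features (placeholder for the trained decoder).
          return np.outer(geometry, style_code)

      def interpolate_styles(z_a, z_b, t):
          # Traverse the latent style space linearly between two keyframes,
          # with t in [0, 1].
          return (1.0 - t) * z_a + t * z_b

      key_a, key_b = np.random.rand(16, 16), np.random.rand(16, 16)  # dummy keyframes
      geometry = np.random.rand(32)           # dummy geometric features of one frame
      z0, z1 = embed(key_a), embed(key_b)
      inbetween = decode(interpolate_styles(z0, z1, 0.5), geometry)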
  • Item
    Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Vidaurre, Raquel; Santesteban, Igor; Garces, Elena; Casas, Dan; Bender, Jan and Popa, Tiberiu
    We present a learning-based approach for virtual try-on applications based on a fully convolutional graph neural network. In contrast to existing data-driven models, which are trained for a specific garment or mesh topology, our fully convolutional model can cope with a large family of garments, represented as predefined parametric 2D panels with arbitrary mesh topology, including long dresses, shirts, and tight tops. Under the hood, our novel geometric deep learning approach learns to drape 3D garments by decoupling the three different sources of deformations that condition the fit of clothing: garment type, target body shape, and material. Specifically, we first learn a regressor that predicts the 3D drape of the input parametric garment when worn by a mean body shape. Then, after a mesh topology optimization step where we generate a sufficient level of detail for the input garment type, we further deform the mesh to reproduce deformations caused by the target body shape. Finally, we predict fine-scale details such as wrinkles that depend mostly on the garment material. We qualitatively and quantitatively demonstrate that our fully convolutional approach outperforms existing methods in terms of generalization capabilities and memory requirements, and therefore it opens the door to more general learning-based models for virtual try-on applications.
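    A minimal sketch of the three-stage decoupling described above, with trivial placeholder functions standing in for the learned graph networks; all shapes and constants are illustrative assumptions.

      import numpy as np

      def drape_on_mean_body(panel_params):
          # Stage 1: predict the 3D drape of the parametric garment on a mean body.
          return np.tile(panel_params[:3], (100, 1))       # dummy vertex positions

      def deform_to_body(verts, body_shape):
          # Stage 2: deform the garment to reproduce the target body shape.
          return verts + 0.01 * body_shape[:3]

      def add_material_detail(verts, stiffness):
          # Stage 3: add fine-scale, material-dependent detail such as wrinkles.
          return verts + 1e-3 * stiffness * np.sin(50.0 * verts)

      panel_params = np.random.rand(8)          # parametric 2D panel description
      body_shape = np.random.rand(10)           # target body shape coefficients
      garment = add_material_detail(
          deform_to_body(drape_on_mean_body(panel_params), body_shape),
          stiffness=0.5)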
  • Item
    Probabilistic Character Motion Synthesis using a Hierarchical Deep Latent Variable Model
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Ghorbani, Saeed; Wloka, Calden; Etemad, Ali; Brubaker, Marcus A.; Troje, Nikolaus F.; Bender, Jan and Popa, Tiberiu
    We present a probabilistic framework to generate character animations based on weak control signals, such that the synthesized motions are realistic while retaining the stochastic nature of human movement. The proposed architecture, which is designed as a hierarchical recurrent model, maps each sub-sequence of motions into a stochastic latent code using a variational autoencoder extended over the temporal domain. We also propose an objective function which respects the impact of each joint on the pose and compares the joint angles based on angular distance. We use two novel quantitative protocols and human qualitative assessment to demonstrate the ability of our model to generate convincing and diverse periodic and non-periodic motion sequences without the need for strong control signals.
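    The joint-weighted angular objective could look roughly like the following sketch, which assumes a quaternion pose parameterization (an assumption, not necessarily the paper's exact formulation) and illustrative joint weights.

      import numpy as np

      def angular_distance(q_pred, q_true):
          # Geodesic distance between unit quaternions, computed per joint.
          dot = np.clip(np.abs(np.sum(q_pred * q_true, axis=-1)), 0.0, 1.0)
          return 2.0 * np.arccos(dot)

      def weighted_pose_loss(q_pred, q_true, joint_weights):
          # Objective that respects each joint's impact on the overall pose.
          return np.sum(joint_weights * angular_distance(q_pred, q_true))

      def random_unit_quats(n):
          q = np.random.randn(n, 4)
          return q / np.linalg.norm(q, axis=-1, keepdims=True)

      q_pred, q_true = random_unit_quats(24), random_unit_quats(24)
      weights = np.linspace(1.0, 0.1, 24)   # e.g. weight root/hip joints more heavily
      loss = weighted_pose_loss(q_pred, q_true, weights)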
  • Item
    Video-Driven Animation of Neural Head Avatars
    (The Eurographics Association, 2023) Paier, Wolfgang; Hinzer, Paul; Hilsmann, Anna; Eisert, Peter; Guthe, Michael; Grosch, Thorsten
    We present a new approach for video-driven animation of high-quality neural 3D head models, addressing the challenge of person-independent animation from video input. Typically, high-quality generative models are learned for specific individuals from multi-view video footage, resulting in person-specific latent representations that drive the generation process. In order to achieve person-independent animation from video input, we introduce an LSTM-based animation network capable of translating person-independent expression features into personalized animation parameters of person-specific 3D head models. Our approach combines the advantages of personalized head models (high quality and realism) with the convenience of video-driven animation employing multi-person facial performance capture. We demonstrate the effectiveness of our approach with high-quality animations synthesized from different source videos, as well as an ablation study.
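    In PyTorch, the translation module might look like the following sketch; the class name, feature sizes, and single-layer architecture are illustrative assumptions, not the paper's exact network.

      import torch
      import torch.nn as nn

      class AnimationNet(nn.Module):
          # Hypothetical sketch: an LSTM translating person-independent expression
          # features into animation parameters of a person-specific head model.
          def __init__(self, feat_dim=64, hidden=128, param_dim=32):
              super().__init__()
              self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
              self.head = nn.Linear(hidden, param_dim)

          def forward(self, expression_feats):      # (batch, time, feat_dim)
              h, _ = self.lstm(expression_feats)
              return self.head(h)                   # (batch, time, param_dim)

      video_feats = torch.randn(1, 120, 64)         # 120 frames of expression features
      anim_params = AnimationNet()(video_feats)     # drives the neural head avatar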
  • Item
    A Survey on Reinforcement Learning Methods in Character Animation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Kwiatkowski, Ariel; Alvarado, Eduardo; Kalogeiton, Vicky; Liu, C. Karen; Pettré, Julien; Panne, Michiel van de; Cani, Marie-Paule; Meneveaux, Daniel; Patanè, Giuseppe
    Reinforcement Learning is an area of Machine Learning focused on how agents can be trained to make sequential decisions and achieve a particular goal within an arbitrary environment. While learning, they repeatedly take actions based on their observation of the environment and receive appropriate rewards, which define the objective. This experience is then used to progressively improve the policy controlling the agent's behavior, typically represented by a neural network. This trained module can then be reused for similar problems, which makes this approach promising for the animation of autonomous yet reactive characters in simulators, video games, or virtual reality environments. This paper surveys modern Deep Reinforcement Learning (DRL) methods and discusses their possible applications in Character Animation, from skeletal control of a single, physically-based character to navigation controllers for individual agents and virtual crowds. It also describes the practical side of training DRL systems, comparing the different frameworks available for building such agents.
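    The agent-environment loop the survey describes can be summarized in a few lines; the toy environment and random policy below are placeholders for a physics simulator and a neural-network policy.

      import numpy as np

      class ToyEnv:
          # Trivial placeholder environment: an episode lasts 10 steps.
          def reset(self):
              self.t = 0
              return self.t                          # initial observation
          def step(self, action):
              self.t += 1
              reward = 1.0 if action == 0 else 0.0   # reward defines the objective
              return self.t, reward, self.t >= 10    # observation, reward, done

      class RandomPolicy:
          # Stand-in for the neural network controlling the agent's behavior.
          def act(self, observation):
              return np.random.randint(2)

      def collect_episode(env, policy):
          # Gather (obs, action, reward) experience used to improve the policy.
          experience, obs, done = [], env.reset(), False
          while not done:
              action = policy.act(obs)
              next_obs, reward, done = env.step(action)
              experience.append((obs, action, reward))
              obs = next_obs
          return experience

      episode = collect_episode(ToyEnv(), RandomPolicy())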
  • Item
    Efficient Interpolation of Rough Line Drawings
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Chen, Jiazhou; Zhu, Xinding; Even, Melvin; Basset, Jean; Bénard, Pierre; Barla, Pascal; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    In traditional 2D animation, sketches drawn at distant keyframes are used to design motion, yet it would be far too labor-intensive to draw all the inbetween frames to fully visualize that motion. We propose a novel, efficient interpolation algorithm that generates these intermediate frames in the artist's drawing style. Starting from a set of registered rough vector drawings, we first generate a large number of candidate strokes during a pre-process; then, at each intermediate frame, we select the subset of those strokes that appropriately conveys the underlying interpolated motion, interpolates the stroke distributions of the key drawings, and introduces a minimum amount of temporal artifacts. In addition, we propose quantitative error metrics to objectively evaluate different stroke selection strategies. We demonstrate the potential of our method on various animations and drawing styles, and show its superiority over competing raster- and vector-based methods.
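    The selection step might be sketched as scoring the precomputed candidate strokes at each inbetween frame and keeping the top subset; the two scoring terms below are crude placeholders for the paper's motion, distribution, and temporal criteria.

      import numpy as np

      def score_stroke(stroke, prev_ids):
          # Placeholder score: favor strokes close to the interpolated motion,
          # and strokes already visible in the previous frame (temporal coherence).
          motion_term = -np.linalg.norm(stroke["pos"] - stroke["target"])
          temporal_term = 0.5 if stroke["id"] in prev_ids else 0.0
          return motion_term + temporal_term

      def select_strokes(candidates, prev_ids, k):
          ranked = sorted(candidates, key=lambda s: score_stroke(s, prev_ids),
                          reverse=True)
          return {s["id"] for s in ranked[:k]}

      candidates = [{"id": i, "pos": np.random.rand(2), "target": np.random.rand(2)}
                    for i in range(50)]              # generated in the pre-process
      selection = select_strokes(candidates, prev_ids={3, 7}, k=20)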
  • Item
    Film Directing for Computer Games and Animation
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Ronfard, Rémi; Bühler, Katja and Rushmeier, Holly
    Over the last forty years, researchers in computer graphics have proposed a large variety of theoretical models and computer implementations of a virtual film director, capable of creating movies from minimal input such as a screenplay or storyboard. The underlying film directing techniques are also in high demand to assist and automate the generation of movies in computer games and animation. The goal of this survey is to characterize the spectrum of applications that require film directing, to present a historical and up-to-date summary of research in algorithmic film directing, and to identify promising avenues and hot topics for future research.
  • Item
    A Simple Surface Tracking Method for Physically-Based 3D Water Simulations
    (The Eurographics Association, 2021) Amador, G.; Gomes, A.; Silva, F. and Gutierrez, D. and Rodríguez, J. and Figueiredo, M.
    Water simulation, and more generally fluid simulation, is an important research topic in computer graphics. In 3D Eulerian Navier-Stokes-based water simulations, surface tracking and rendering are two delicate problems. The existing solutions to these problems (i.e., implicit-surface-based approaches, height fields, ray tracing) are either too computationally intensive for real-time scenarios or produce bulging (i.e., blobby) water surfaces. In this paper, we propose a novel tracking algorithm for rendering water surfaces. Instead of tracking the flow of water using either level sets or height fields, the density value of each cell of a 3D grid is measured directly to determine whether the cell contains water, air, or part of the water-air contact surface. This per-cell information is then used to render the water surface with splats, using OpenGL vertex buffer objects.
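    A minimal sketch of the per-cell classification on a 3D density grid; the threshold and the 6-neighborhood test are illustrative choices, and grid borders wrap around for brevity.

      import numpy as np

      def classify_cells(density, threshold=0.5):
          # A cell is water if its density exceeds the threshold; a water cell
          # with at least one air neighbor lies on the water-air contact surface.
          water = density > threshold
          surface = np.zeros_like(water)
          for axis in range(3):
              for shift in (-1, 1):
                  air_neighbor = ~np.roll(water, shift, axis=axis)
                  surface |= water & air_neighbor    # note: np.roll wraps at borders
          return water, surface                      # surface cells get splatted

      density = np.random.rand(32, 32, 32)           # dummy measured density grid
      water, surface = classify_cells(density)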
  • Item
    Linear Time Stable PD Controllers for Physics-based Character Animation
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Yin, Zhiqi; Yin, KangKang; Bender, Jan and Popa, Tiberiu
    In physics-based character animation, Proportional-Derivative (PD) controllers are commonly used for tracking reference motions in motor control tasks. Stable PD (SPD) controllers significantly improve the numerical stability of traditional PD controllers and support large gains and large integration time steps during simulation [TLT11]. For an articulated rigid body system with n degrees of freedom, however, all SPD implementations to date use an O(n³) method based on dense matrix factorization. In this paper, we propose a linear-time algorithm for SPD computation, based on Featherstone's forward dynamics formulation for articulated rigid body systems in generalized coordinates [Fea14]. We demonstrate the performance advantage of our algorithm by comparing it with both the conventional dense matrix factorization based method and an alternative sparse matrix factorization based method. We show that the proposed algorithm provides superior stability when controlling complex models at large time steps. We further demonstrate that our algorithm can improve the learning speed and quality of a Deep Reinforcement Learning (DRL) system for physics-based character animation.
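    For reference, the conventional dense baseline that the paper's linear-time algorithm replaces can be sketched as follows, after the SPD formulation of [TLT11]; the gains and dummy dynamics quantities are illustrative.

      import numpy as np

      def spd_torque(q, qdot, qbar, M, c, kp, kd, dt):
          # Stable PD after [TLT11]:
          #   tau = -kp*(q + dt*qdot - qbar) - kd*(qdot + dt*qddot),
          # with qddot obtained from an implicit solve. The dense O(n^3) solve
          # below is the step the paper replaces with Featherstone O(n) dynamics.
          p_term = -kp * (q + dt * qdot - qbar)
          d_term = -kd * qdot
          qddot = np.linalg.solve(M + np.diag(kd) * dt, p_term + d_term - c)
          return p_term + d_term - dt * kd * qddot

      n = 30                                 # degrees of freedom
      M, c = np.eye(n), np.zeros(n)          # dummy mass matrix and bias forces
      q, qdot, qbar = np.zeros(n), np.zeros(n), np.full(n, 0.1)
      tau = spd_torque(q, qdot, qbar, M, c,
                       kp=np.full(n, 300.0), kd=np.full(n, 30.0), dt=1.0 / 60.0)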
  • Item
    Synthesizing Character Animation with Smoothly Decomposed Motion Layers
    (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Eom, Haegwang; Choi, Byungkuk; Cho, Kyungmin; Jung, Sunjin; Hong, Seokpyo; Noh, Junyong; Benes, Bedrich and Hauser, Helwig
    The processing of captured motion is an essential task for undertaking the synthesis of high-quality character animation. The motion decomposition techniques investigated in prior work extract meaningful motion primitives that help to facilitate this process. Carefully selected motion primitives can play a major role in various motion-synthesis tasks, such as interpolation, blending, warping, editing or the generation of new motions. Unfortunately, for a complex character motion, finding generic motion primitives by decomposition is an intractable problem due to the compound nature of the behaviours of such characters. Additionally, decomposed motion primitives tend to be too limited for the chosen model to cover a broad range of motion-synthesis tasks. To address these challenges, we propose a generative motion decomposition framework in which the decomposed motion primitives are applicable to a wide range of motion-synthesis tasks. Technically, the input motion is smoothly decomposed into three motion layers. These are base-level motion, a layer with controllable motion displacements and a layer with high-frequency residuals. The final motion can easily be synthesized simply by changing a single user parameter that is linked to the layer of controllable motion displacements or by imposing suitable temporal correspondences to the decomposition framework. Our experiments show that this decomposition provides a great deal of flexibility in several motion synthesis scenarios: denoising, style modulation, upsampling and time warping.
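    The layering can be illustrated on a 1D joint trajectory with simple moving-average filters standing in for the paper's smooth decomposition; the filter widths and the 1.5x displacement gain are illustrative.

      import numpy as np

      def smooth(x, width):
          # Moving-average low-pass filter (a crude stand-in for the actual method).
          return np.convolve(x, np.ones(width) / width, mode="same")

      def decompose(motion, base_width=31, mid_width=7):
          base = smooth(motion, base_width)                  # base-level motion
          displacement = smooth(motion - base, mid_width)    # controllable displacements
          residual = motion - base - displacement            # high-frequency residuals
          return base, displacement, residual

      t = np.linspace(0.0, 6.0 * np.pi, 300)
      motion = np.sin(t) + 0.05 * np.random.randn(300)       # dummy joint trajectory
      base, disp, res = decompose(motion)
      stylized = base + 1.5 * disp + res   # one user parameter scales the middle layer
      denoised = base + disp               # dropping the residual layer denoises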