Search Results

  • Item
    A Survey on Reinforcement Learning Methods in Character Animation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Kwiatkowski, Ariel; Alvarado, Eduardo; Kalogeiton, Vicky; Liu, C. Karen; Pettré, Julien; Panne, Michiel van de; Cani, Marie-Paule; Meneveaux, Daniel; Patanè, Giuseppe
    Reinforcement Learning is an area of Machine Learning focused on how agents can be trained to make sequential decisions and achieve a particular goal within an arbitrary environment. While learning, they repeatedly take actions based on their observation of the environment and receive appropriate rewards that define the objective. This experience is then used to progressively improve the policy controlling the agent's behavior, typically represented by a neural network. The trained module can then be reused for similar problems, which makes this approach promising for the animation of autonomous, yet reactive characters in simulators, video games or virtual reality environments. This paper surveys modern Deep Reinforcement Learning methods and discusses their possible applications in Character Animation, from skeletal control of a single, physically-based character to navigation controllers for individual agents and virtual crowds. It also describes the practical side of training DRL systems, comparing the different frameworks available to build such agents.
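    To make the agent-environment loop described above concrete, here is a minimal tabular Q-learning sketch in Python; the toy CorridorEnv, its reward values, and the epsilon-greedy action choice are illustrative assumptions, not taken from the survey.
    ```python
    import random

    class CorridorEnv:
        """Toy environment: the agent starts at cell 0 and must reach cell 4."""
        def __init__(self):
            self.n_states, self.goal = 5, 4
        def reset(self):
            self.state = 0
            return self.state
        def step(self, action):  # action: 0 = left, 1 = right
            self.state = max(0, min(self.n_states - 1, self.state + (1 if action else -1)))
            done = self.state == self.goal
            return self.state, (1.0 if done else -0.1), done  # observation, reward, done

    env = CorridorEnv()
    q = [[0.0, 0.0] for _ in range(env.n_states)]  # tabular stand-in for the policy network
    alpha, gamma, eps = 0.5, 0.9, 0.1

    for episode in range(200):
        s, done = env.reset(), False
        while not done:
            # act from the current observation, occasionally exploring
            a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda a: q[s][a])
            s2, r, done = env.step(a)
            # use the experience (s, a, r, s2) to progressively improve the policy
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2

    print([max((0, 1), key=lambda a: q[s][a]) for s in range(env.n_states)])  # learned action per cell
    ```
    A deep RL method would replace the Q-table with a neural network trained on the same kind of experience.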
  • Item
    Film Directing for Computer Games and Animation
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Ronfard, Rémi; Bühler, Katja and Rushmeier, Holly
    Over the last forty years, researchers in computer graphics have proposed a large variety of theoretical models and computer implementations of a virtual film director, capable of creating movies from minimal input such as a screenplay or storyboard. The underlying film directing techniques are also in high demand to assist and automate the generation of movies in computer games and animation. The goal of this survey is to characterize the spectrum of applications that require film directing, to present a historical and up-to-date summary of research in algorithmic film directing, and to identify promising avenues and hot topics for future research.
  • Item
    Presenting a Deep Motion Blending Approach for Simulating Natural Reach Motions
    (The Eurographics Association, 2018) Gaisbauer, Felix; Froehlich, Philipp; Lehwald, Jannes; Agethen, Philipp; Rukzio, Enrico; Jain, Eakta and Kosinka, Jirí
    Motion blending and character animation systems are widely used in domains such as gaming and simulation in the production industries. Most established approaches are based on motion blending techniques, which provide natural motions in common scenarios while incurring low computational cost. However, as the number of influence parameters and constraints such as collision avoidance grows, these approaches increasingly fail, or require a vast amount of time to meet the requirements. With ongoing progress in artificial intelligence and neural networks, recent works present deep-learning-based approaches to motion synthesis, which offer great potential for modeling natural motions while considering heterogeneous influence factors. In this paper, we propose a novel deep blending approach for simulating non-cyclical natural reach motions, based on an extension of phase-functioned neural networks.
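    As a rough illustration of the phase-functioned idea this approach extends, the sketch below generates a layer's weights by Catmull-Rom blending of four control weight sets indexed by a phase variable; the layer sizes and all names are assumptions for illustration, not the authors' implementation.
    ```python
    import numpy as np

    def catmull_rom(w0, w1, w2, w3, t):
        """Cubic Catmull-Rom interpolation between weight sets w1 and w2."""
        return (w1 + 0.5 * t * (w2 - w0)
                + t ** 2 * (w0 - 2.5 * w1 + 2.0 * w2 - 0.5 * w3)
                + t ** 3 * (1.5 * w1 - 1.5 * w2 + 0.5 * w3 - 0.5 * w0))

    rng = np.random.default_rng(0)
    n_in, n_out = 32, 16
    W = [rng.normal(0, 0.1, (n_out, n_in)) for _ in range(4)]  # control points on the phase circle

    def phase_functioned_layer(x, phase):
        """Blend layer weights as a function of phase in [0, 2*pi), then apply them."""
        p = 4.0 * phase / (2.0 * np.pi)  # map phase to control-point index space
        k, t = int(p) % 4, p - int(p)
        w = catmull_rom(W[(k - 1) % 4], W[k], W[(k + 1) % 4], W[(k + 2) % 4], t)
        return np.tanh(w @ x)

    x = rng.normal(size=n_in)  # e.g. character state plus reach target
    print(phase_functioned_layer(x, phase=1.3).shape)  # (16,)
    ```
    In the original phase-functioned networks, every layer is generated this way and the phase advances with the motion cycle; a reach motion is non-cyclical, which is what motivates the extension.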
  • Item
    Introducing a Modular Concept for Exchanging Character Animation Approaches
    (The Eurographics Association, 2018) Gaisbauer, Felix; Agethen, Philipp; Bär, Thomas; Rukzio, Enrico; Jain, Eakta and Kosinka, Jirí
    Nowadays, motion synthesis and character animation systems are used in domains ranging from gaming to medicine and the production industries. In recent years, there has been vast progress in realistic character animation. In this context, motion-capture-based animation systems are frequently used to generate natural motions. Other approaches use physics-based simulation, statistical models or machine learning methods to generate realistic motions. These approaches are, however, tightly coupled to their development environment, inducing high porting effort when incorporated into different platforms. Currently, no standard exists that allows complex character animation approaches to be exchanged. A comprehensive simulation of complex scenarios utilizing these heterogeneous approaches is therefore not yet possible. In a different domain, the Functional Mock-up Interface standard has already solved this problem: initially tailored to industrial needs, the standard allows dynamic simulation approaches, such as solvers for mechatronic components, to be exchanged. We present a novel concept extending this standard to couple arbitrary character animation approaches through a common interface.
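    A minimal sketch of what such a common interface could look like, loosely modeled on the FMI lifecycle (initialize / step / terminate); the MotionModelUnit name, its methods, and the pose representation are hypothetical illustrations, not the interface defined in the paper.
    ```python
    from abc import ABC, abstractmethod

    Pose = dict[str, tuple[float, float, float]]  # joint name -> position (toy representation)

    class MotionModelUnit(ABC):
        """Hypothetical FMI-style wrapper around an exchangeable animation approach."""
        @abstractmethod
        def initialize(self, skeleton: list[str], config: dict) -> None: ...
        @abstractmethod
        def do_step(self, current: Pose, target: Pose, dt: float) -> Pose: ...
        @abstractmethod
        def terminate(self) -> None: ...

    class LinearBlendUnit(MotionModelUnit):
        """Trivial example unit: moves each joint a fixed fraction toward its target."""
        def initialize(self, skeleton, config):
            self.rate = config.get("rate", 0.1)
        def do_step(self, current, target, dt):
            return {j: tuple(c + self.rate * (t - c) for c, t in zip(current[j], target[j]))
                    for j in current}
        def terminate(self):
            pass

    unit: MotionModelUnit = LinearBlendUnit()  # any compliant unit can be swapped in
    unit.initialize(["hand"], {"rate": 0.25})
    print(unit.do_step({"hand": (0.0, 0.0, 0.0)}, {"hand": (1.0, 0.0, 0.0)}, dt=0.016))
    ```
    The point of the FMI analogy is that the host simulation only talks to this lifecycle, so a physics-based, statistical or learned unit becomes interchangeable.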
  • Item
    A Comprehensive Review of Data-Driven Co-Speech Gesture Generation
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Nyatsanga, Simbarashe; Kucherenko, Taras; Ahuja, Chaitanya; Henter, Gustav Eje; Neff, Michael; Bousseau, Adrien; Theobalt, Christian
    Gestures that accompany speech are an essential part of natural and efficient embodied human communication. The automatic generation of such co-speech gestures is a long-standing problem in computer animation and is considered an enabling technology for creating believable characters in film, games, and virtual social spaces, as well as for interaction with social robots. The problem is made challenging by the idiosyncratic and non-periodic nature of human co-speech gesture motion, and by the great diversity of communicative functions that gestures encompass. The field of gesture generation has seen surging interest in the last few years, owing to the emergence of more and larger datasets of human gesture motion, combined with strides in deep-learning-based generative models that benefit from the growing availability of data. This review article summarizes co-speech gesture generation research, with a particular focus on deep generative models. First, we articulate the theory describing human gesticulation and how it complements speech. Next, we briefly discuss rule-based and classical statistical gesture synthesis, before delving into deep learning approaches. We employ the choice of input modalities as an organizing principle, examining systems that generate gestures from audio, text and non-linguistic input. Concurrent with the exposition of deep learning approaches, we chronicle the evolution of the related training datasets in terms of size, diversity, motion quality, and collection method (e.g., optical motion capture or pose estimation from video). Finally, we identify key research challenges in gesture generation, including data availability and quality; producing human-like motion; grounding the gesture in the co-occurring speech, in interaction with other speakers, and in the environment; performing gesture evaluation; and integration of gesture synthesis into applications. We highlight recent approaches to tackling the various key challenges, as well as the limitations of these approaches, and point toward areas of future development.
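    As a minimal illustration of the audio-driven setting this review organizes around, the PyTorch sketch below maps a sequence of audio features to per-frame joint rotations; the feature and joint counts and the GRU architecture are placeholder assumptions, not a system from the literature.
    ```python
    import torch
    import torch.nn as nn

    class AudioToGesture(nn.Module):
        """Toy audio-conditioned gesture generator: audio features in, joint rotations out."""
        def __init__(self, n_audio_feats=26, n_joints=15, hidden=128):
            super().__init__()
            self.encoder = nn.GRU(n_audio_feats, hidden, batch_first=True)
            self.decoder = nn.Linear(hidden, n_joints * 3)  # per-frame joint angles

        def forward(self, audio):  # audio: (batch, frames, n_audio_feats)
            h, _ = self.encoder(audio)
            return self.decoder(h)  # (batch, frames, n_joints * 3)

    model = AudioToGesture()
    mfcc = torch.randn(2, 100, 26)  # e.g. 100 frames of MFCC features per clip
    print(model(mfcc).shape)  # torch.Size([2, 100, 45])
    ```
    Deterministic regression like this tends to produce averaged, damped motion, which is one reason the review focuses on deep generative models and on richer inputs such as text.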
  • Item
    Perceptual Characteristics by Motion Style Category
    (The Eurographics Association, 2019) Kim, Hye Ji; Lee, Sung-Hee; Cignoni, Paolo and Miguel, Eder
    Motion style is important as it characterizes a motion, conveying context such as emotion and personality. Yet, the perception and interpretation of motion styles are subjective and may vary greatly from person to person. This paper investigates the perceptual characteristics of a wide range of motion styles. After categorizing the motion styles, we perform user studies to examine the diversity of interpretations of motion styles and the level of association between style motions and their corresponding text descriptions. Our study shows that interpretation diversity and association level differ across style categories. We discuss the implications of these findings and recommend a method for labeling or describing motion styles.
  • Item
    Splash in a Flash: Sharpness-aware Minimization for Efficient Liquid Splash Simulation
    (The Eurographics Association, 2022) Jetly, Vishrut; Ibayashi, Hikaru; Nakano, Aiichiro; Sauvage, Basile; Hasic-Telalovic, Jasminka
    We present sharpness-aware minimization (SAM) for fluid dynamics, which can efficiently learn the plausible dynamics of liquid splashes. Owing to its ability to find robust, generalizing solutions, SAM efficiently converges to a parameter set that predicts plausible dynamics of elusive liquid splashes. Our training scheme requires 6 times fewer epochs to converge and 4 times less wall-clock time. Our results show that the sharpness of the loss function is closely connected to the plausibility of the simulated fluid dynamics, suggesting further applicability of SAM to machine-learning-based fluid simulation.
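    SAM itself is a general two-step optimizer: ascend to the locally worst-case weights within a small ball, then update the original weights with the gradient taken there. Below is a minimal numpy sketch on a toy quadratic loss; the loss, rho, and learning rate are illustrative stand-ins, not the paper's splash-simulation setup.
    ```python
    import numpy as np

    def loss_grad(w):
        """Toy loss L(w) = ||A w - b||^2 and its gradient (stand-in for the splash network loss)."""
        A = np.array([[2.0, 0.5], [0.5, 1.0]])
        b = np.array([1.0, -1.0])
        r = A @ w - b
        return r @ r, 2.0 * A.T @ r

    w, rho, lr = np.array([3.0, -2.0]), 0.05, 0.1
    for step in range(100):
        _, g = loss_grad(w)
        # Step 1: perturb toward the sharpest nearby point within a rho-ball
        eps = rho * g / (np.linalg.norm(g) + 1e-12)
        _, g_sam = loss_grad(w + eps)
        # Step 2: update the *original* weights with the perturbed gradient
        w -= lr * g_sam
    print(w, loss_grad(w)[0])  # the perturbation biases training toward flat minima
    ```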
  • Item
    Stroke based Painterly Inbetweening
    (The Eurographics Association, 2022) Barroso, Nicolas; Fondevilla, Amélie; Vanderhaeghe, David; Sauvage, Basile; Hasic-Telalovic, Jasminka
    Creating a 2D animation with visible strokes is a tedious and time-consuming task for an artist. Computer-aided animation usually focuses on cartoon-stylized rendering, or is built from an automatic process such as the stylization of 3D animations, losing the painterly look and feel of hand-made animation. We propose to simplify the creation of stroke-based animations: from a set of key frames, our method automatically generates intermediate frames to depict the animation. Each intermediate frame looks as if it could have been drawn by an artist, using the same high-level stroke-based representation as the key frames, and in succession the frames display the subtle temporal incoherence usually found in hand-made animations.
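    A toy sketch of the core idea: interpolate matching stroke control points between two key frames and add a small per-frame jitter so successive frames are not perfectly coherent. Representing strokes as polylines and the Gaussian noise model are assumptions for illustration, not the paper's method.
    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Two key frames: each stroke is a polyline of 2D control points with matching topology.
    key_a = [np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.0]])]
    key_b = [np.array([[0.0, 1.0], [1.0, 1.3], [2.0, 1.0]])]

    def inbetween(key_a, key_b, t, jitter=0.02):
        """Interpolate matching strokes at time t in [0, 1], with slight jitter
        to mimic the temporal incoherence of hand-made frames."""
        frame = []
        for sa, sb in zip(key_a, key_b):
            pts = (1.0 - t) * sa + t * sb
            frame.append(pts + rng.normal(0.0, jitter, pts.shape))
        return frame

    frames = [inbetween(key_a, key_b, t) for t in np.linspace(0.0, 1.0, 5)]
    print(frames[2][0])  # middle frame, first stroke
    ```
    In the paper, each intermediate frame keeps the high-level stroke representation so it reads as hand-drawn; naive point interpolation as above is only a starting point.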
  • Item
    Neural Motion Compression with Frequency-adaptive Fourier Feature Network
    (The Eurographics Association, 2022) Tojo, Kenji; Chen, Yifei; Umetani, Nobuyuki; Pelechano, Nuria; Vanderhaeghe, David
    We present a neural-network-based compression method to alleviate the storage cost of motion capture data. Human motions, such as locomotion, often consist of periodic movements. We leverage this periodicity by applying Fourier features to a multilayer perceptron network. Our novel algorithm finds a set of Fourier feature frequencies based on the discrete cosine transform (DCT) of the motion. During training, we incrementally add the dominant frequency of the DCT to the current set of Fourier feature frequencies until a given quality threshold is satisfied. We conducted an experiment on the CMU motion dataset, and the results suggest that our method achieves a high overall compression ratio while maintaining motion quality.
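    The frequency-selection step can be sketched as follows: take the DCT of a motion channel, greedily add the most dominant bins to the Fourier feature set, and stop at a quality threshold. The toy signal and the energy-based stopping rule are illustrative assumptions; the paper checks reconstruction quality during training rather than thresholding DCT energy directly.
    ```python
    import numpy as np
    from scipy.fft import dct

    # Toy 1-D "motion channel": two periodic components over a 256-frame clip.
    n = 256
    t = np.arange(n)
    signal = np.sin(2 * np.pi * 3 * t / n) + 0.4 * np.sin(2 * np.pi * 11 * t / n)

    coeffs = np.abs(dct(signal, norm="ortho"))
    order = np.argsort(coeffs)[::-1]  # DCT bins sorted by dominance

    def fourier_features(t, freqs):
        """Encode frame index with sin/cos at the selected frequencies (the MLP's input)."""
        cols = [f(2 * np.pi * k * t / n) for k in freqs for f in (np.sin, np.cos)]
        return np.stack(cols, axis=-1)

    # Incrementally grow the frequency set until the captured DCT energy passes a threshold.
    freqs, energy, total = [], 0.0, np.sum(coeffs ** 2)
    for k in order:
        freqs.append(k / 2.0)  # DCT-II bin k corresponds to k/2 cycles over the clip
        energy += coeffs[k] ** 2
        if energy / total > 0.99:  # stand-in for the paper's quality threshold
            break

    print(sorted(freqs), fourier_features(t, freqs).shape)
    ```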
  • Item
    Controllable Caustic Animation Using Vector Fields
    (The Eurographics Association, 2020) Rojo, Irene Baeza; Gross, Markus; Günther, Tobias; Wilkie, Alexander and Banterle, Francesco
    In movie production, lighting is commonly used to redirect attention or to set the mood in a scene. The detailed editing of complex lighting phenomena, however, is as tedious as it is important, especially with dynamic lights or when light is a relevant story element. In this paper, we propose a new method to create caustic animations that are controllable through constraints drawn by the user. Our method blends caustics into a specified target image by treating photons as particles that move in a divergence-free fluid, an irrotational vector field, or a linear combination of the two. Once described as a flow, additional user constraints are easily added, e.g., to direct the flow, create boundaries or add synthetic turbulence, which offers new ways to redirect and control light. The corresponding vector field is computed by fitting a stream function and a scalar potential per time step, with constraints expressed in a quadratic energy that we minimize as a linear least-squares problem. Finally, photons are placed back into the scene at their new positions and rendered with progressive photon mapping.
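    The field construction the abstract describes can be sketched numerically: a divergence-free component comes from a stream function psi via u = (d psi / dy, -d psi / dx), an irrotational component from a scalar potential phi via u = grad phi, and photons are advected through a blend of the two. The analytic psi and phi, the blend weight, and the forward-Euler advection below are toy assumptions standing in for the paper's per-time-step least-squares fit.
    ```python
    import numpy as np

    def velocity(p, alpha=0.5, h=1e-4):
        """Blend of a divergence-free field (from stream function psi) and an
        irrotational field (from scalar potential phi), evaluated at points p (N, 2)."""
        psi = lambda x, y: np.sin(x) * np.cos(y)    # toy stream function
        phi = lambda x, y: 0.1 * (x ** 2 - y ** 2)  # toy scalar potential
        x, y = p[:, 0], p[:, 1]
        # divergence-free part: u = (d psi / dy, -d psi / dx), by central differences
        u_df = np.stack([(psi(x, y + h) - psi(x, y - h)) / (2 * h),
                         -(psi(x + h, y) - psi(x - h, y)) / (2 * h)], axis=1)
        # irrotational part: u = grad phi
        u_ir = np.stack([(phi(x + h, y) - phi(x - h, y)) / (2 * h),
                         (phi(x, y + h) - phi(x, y - h)) / (2 * h)], axis=1)
        return alpha * u_df + (1.0 - alpha) * u_ir

    # Advect "photons" as particles through the blended flow.
    rng = np.random.default_rng(2)
    photons = rng.uniform(-1.0, 1.0, (1000, 2))
    for _ in range(50):
        photons += 0.02 * velocity(photons)
    print(photons.mean(axis=0))  # final positions would be splatted back for photon mapping
    ```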