Search Results
Now showing 1 - 10 of 23
Item: Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Vidaurre, Raquel; Santesteban, Igor; Garces, Elena; Casas, Dan; Bender, Jan and Popa, Tiberiu
We present a learning-based approach for virtual try-on applications based on a fully convolutional graph neural network. In contrast to existing data-driven models, which are trained for a specific garment or mesh topology, our fully convolutional model can cope with a large family of garments, represented as parametric predefined 2D panels with arbitrary mesh topology, including long dresses, shirts, and tight tops. Under the hood, our novel geometric deep learning approach learns to drape 3D garments by decoupling the three different sources of deformations that condition the fit of clothing: garment type, target body shape, and material. Specifically, we first learn a regressor that predicts the 3D drape of the input parametric garment when worn by a mean body shape. Then, after a mesh topology optimization step where we generate a sufficient level of detail for the input garment type, we further deform the mesh to reproduce deformations caused by the target body shape. Finally, we predict fine-scale details such as wrinkles that depend mostly on the garment material. We qualitatively and quantitatively demonstrate that our fully convolutional approach outperforms existing methods in terms of generalization capabilities and memory requirements, and therefore it opens the door to more general learning-based models for virtual try-on applications.

Item: Probabilistic Character Motion Synthesis using a Hierarchical Deep Latent Variable Model (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Ghorbani, Saeed; Wloka, Calden; Etemad, Ali; Brubaker, Marcus A.; Troje, Nikolaus F.; Bender, Jan and Popa, Tiberiu
We present a probabilistic framework to generate character animations based on weak control signals, such that the synthesized motions are realistic while retaining the stochastic nature of human movement. The proposed architecture, which is designed as a hierarchical recurrent model, maps each sub-sequence of motions into a stochastic latent code using a variational autoencoder extended over the temporal domain. We also propose an objective function which respects the impact of each joint on the pose and compares the joint angles based on angular distance. We use two novel quantitative protocols and human qualitative assessment to demonstrate the ability of our model to generate convincing and diverse periodic and non-periodic motion sequences without the need for strong control signals.
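The objective function mentioned in the entry above weights joints by their impact on the pose and compares joint angles by angular distance. A minimal sketch of that kind of loss, assuming quaternion-valued joint rotations and hand-chosen per-joint weights (both illustrative placeholders, not the paper's actual formulation):

```python
import numpy as np

def quat_angle(q1, q2):
    """Geodesic angle between unit quaternions, in radians."""
    dot = np.abs(np.sum(q1 * q2, axis=-1))      # abs() handles the double cover
    return 2.0 * np.arccos(np.clip(dot, 0.0, 1.0))

def weighted_pose_loss(pred, target, joint_weights):
    """Per-joint angular distance, weighted by each joint's impact on the pose.

    pred, target: (num_joints, 4) unit quaternions
    joint_weights: (num_joints,), e.g. larger for hips/spine than for fingers
    """
    angles = quat_angle(pred, target)            # (num_joints,)
    return np.sum(joint_weights * angles) / np.sum(joint_weights)
```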
Item: Latent Space Subdivision: Stable and Controllable Time Predictions for Fluid Flow (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Wiewel, Steffen; Kim, Byungsoo; Azevedo, Vinicius; Solenthaler, Barbara; Thuerey, Nils; Bender, Jan and Popa, Tiberiu
We propose an end-to-end trained neural network architecture to robustly predict the complex dynamics of fluid flows with high temporal stability. We focus on single-phase smoke simulations in 2D and 3D based on the incompressible Navier-Stokes (NS) equations, which are relevant for a wide range of practical problems. To achieve stable predictions for long-term flow sequences with linear execution times, a convolutional neural network (CNN) is trained for spatial compression in combination with a temporal prediction network that consists of stacked Long Short-Term Memory (LSTM) layers. Our core contribution is a novel latent space subdivision (LSS) that separates the respective input quantities into individual parts of the encoded latent space domain. As a result, the encoded quantities can be altered individually without interfering with the remaining latent space values, which maximizes external control. By selectively overwriting parts of the predicted latent space points, our proposed method can robustly predict long-term sequences of complex physics problems, such as the flow of fluids. In addition, we highlight the benefits of recurrent training on the latent space creation, which is performed by the spatial compression network. Furthermore, we thoroughly evaluate and discuss several different components of our method.
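The latent space subdivision described above amounts to reserving fixed slices of the latent code for specific quantities, so that one slice can be overwritten with an externally supplied value while the predictor's output for the other slices is kept. A toy sketch of that control mechanism, with placeholder dimensions and a dummy predictor standing in for the paper's CNN encoder and LSTM:

```python
import numpy as np

# Illustrative sizes: a 16-dimensional latent code whose first 4 entries are
# reserved for the externally controlled quantity (e.g. velocity). The split,
# sizes, and the dummy predictor are assumptions, not the paper's networks.
D_CTRL, D_TOTAL = 4, 16

def predict_next(z_history):
    """Stand-in for the temporal predictor (stacked LSTMs in the paper)."""
    return z_history[-1].copy()  # dummy: repeat the last latent point

def step_with_control(z_history, z_control):
    """One rollout step: predict the next latent point, then overwrite only the
    slice that encodes the controlled quantity, leaving the rest untouched."""
    z_next = predict_next(z_history)
    z_next[:D_CTRL] = z_control      # external control acts on its own sub-space
    return z_next

# Usage: roll out a few steps while holding the controlled quantity fixed.
history = [np.random.randn(D_TOTAL)]
control = np.zeros(D_CTRL)
for _ in range(3):
    history.append(step_with_control(history, control))
```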
Item: Primal/Dual Descent Methods for Dynamics (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Macklin, Miles; Erleben, Kenny; Müller, Matthias; Chentanez, Nuttapong; Jeschke, Stefan; Kim, Tae-Yong; Bender, Jan and Popa, Tiberiu
We examine the relationship between primal, or force-based, and dual, or constraint-based, formulations of dynamics. Variational frameworks such as Projective Dynamics have proved popular for deformable simulation; however, they have not been adopted for contact-rich scenarios such as rigid body simulation. We propose a new preconditioned frictional contact solver that is compatible with existing primal optimization methods and competitive with complementarity-based approaches. Our relaxed primal model generates improved contact force distributions when compared to dual methods, and has the advantage of being differentiable, making it well-suited for trajectory optimization. We derive both primal and dual methods from a common variational point of view, and present a comprehensive numerical analysis of both methods with respect to conditioning. We demonstrate our method on scenarios including rigid body contact, deformable simulation, and robotic manipulation.

Item: Intuitive Facial Animation Editing Based On A Generative RNN Framework (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Berson, Eloïse; Soladié, Catherine; Stoiber, Nicolas; Bender, Jan and Popa, Tiberiu
For decades, producing convincing facial animation has garnered great interest, which has only accelerated with the recent explosion of 3D content in both entertainment and professional activities. The use of motion capture and retargeting has arguably become the dominant solution to address this demand. Yet, despite a high level of quality and automation, performance-based animation pipelines still require manual cleaning and editing to refine raw results, which is a time- and skill-demanding process. In this paper, we look to leverage machine learning to make facial animation editing faster and more accessible to non-experts. Inspired by recent image inpainting methods, we design a generative recurrent neural network that generates realistic motion into designated segments of an existing facial animation, optionally following user-provided guiding constraints. Our system handles different supervised or unsupervised editing scenarios such as motion filling during occlusions, expression corrections, semantic content modifications, and noise filtering. We demonstrate the usability of our system on several animation editing use cases.

Item: Detailed Rigid Body Simulation with Extended Position Based Dynamics (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Müller, Matthias; Macklin, Miles; Chentanez, Nuttapong; Jeschke, Stefan; Kim, Tae-Yong; Bender, Jan and Popa, Tiberiu
We present a rigid body simulation method that can resolve small temporal and spatial details by using a quasi-explicit integration scheme that is unconditionally stable. Traditional rigid body simulators linearize constraints because they operate on the velocity level or solve the equations of motion implicitly, thereby freezing the constraint directions for multiple iterations. Our method always works with the most recent constraint directions. This allows us to trace high-speed motion of objects colliding against curved geometry, to reduce the number of constraints, to increase the robustness of the simulation, and to simplify the formulation of the solver. In this paper we provide all the details needed to implement a fully fledged rigid body solver that handles contacts, a variety of joint types, and the interaction with soft objects.

Item: Linear Time Stable PD Controllers for Physics-based Character Animation (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Yin, Zhiqi; Yin, KangKang; Bender, Jan and Popa, Tiberiu
In physics-based character animation, Proportional-Derivative (PD) controllers are commonly used for tracking reference motions in motor control tasks. Stable PD (SPD) controllers significantly improve the numerical stability of traditional PD controllers and support large gains and large integration time steps during simulation [TLT11]. For an articulated rigid body system with n degrees of freedom, however, all SPD implementations to date use an O(n³) method based on dense matrix factorization. In this paper, we propose a linear-time algorithm for SPD computation, based on Featherstone's forward dynamics formulation for articulated rigid body systems in generalized coordinates [Fea14]. We demonstrate the performance advantage of our algorithm by comparing with both the conventional dense matrix factorization based method and an alternative sparse matrix factorization based method. We show that the proposed algorithm provides superior stability when controlling complex models at large time steps. We further demonstrate that our algorithm can improve the learning speed and quality of a Deep Reinforcement Learning (DRL) system for physics-based character animation.
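For reference, the conventional Stable PD computation from [TLT11] that this paper accelerates looks roughly as follows. The sketch below is the dense O(n³) baseline (one linear solve against the mass matrix per control step), not the proposed linear-time, Featherstone-based algorithm; the array shapes and the bias-force term c are assumptions of this sketch.

```python
import numpy as np

def stable_pd_torque(q, qdot, q_target, kp, kd, M, c, dt):
    """Dense Stable PD control torque, following [TLT11].

    q, qdot, q_target: (n,) generalized positions, velocities, and targets
    kp, kd:            (n,) proportional and derivative gains
    M:                 (n, n) generalized mass matrix
    c:                 (n,) bias forces (Coriolis, gravity, external) - assumed here
    dt:                integration time step
    """
    Kp, Kd = np.diag(kp), np.diag(kd)
    p_err = q + dt * qdot - q_target                 # predicted position error
    tau_pd = -Kp @ p_err - Kd @ qdot
    # Implicit solve for the next-step acceleration: (M + dt*Kd) qddot = tau_pd - c
    qddot = np.linalg.solve(M + dt * Kd, tau_pd - c)  # the O(n^3) step
    # Damping acts on the implicitly advanced velocity, which is what stabilizes SPD.
    return -Kp @ p_err - Kd @ (qdot + dt * qddot)
```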
Item: Statistics-based Motion Synthesis for Social Conversations (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Yang, Yanzhe; Yang, Jimei; Hodgins, Jessica; Bender, Jan and Popa, Tiberiu
Plausible conversations among characters are required to generate the ambiance of social settings such as a restaurant, hotel lobby, or cocktail party. In this paper, we propose a motion synthesis technique that can rapidly generate animated motion for characters engaged in two-party conversations. Our system synthesizes gestures and other body motions for dyadic conversations that synchronize with novel input audio clips. Human conversations feature many different forms of coordination and synchronization. For example, speakers use hand gestures to emphasize important points, and listeners often nod in agreement or acknowledgment. To achieve the desired degree of realism, our method first constructs a motion graph that preserves the statistics of a database of recorded conversations performed by a pair of actors. This graph is then used to search for a motion sequence that respects three forms of audio-motion coordination in human conversations: coordination to phonemic clause, listener response, and partner's hesitation pause. We assess the quality of the generated animations through a user study that compares them to the originally recorded motion, and evaluate the effects of each type of audio-motion coordination via ablation studies.

Item: A Pixel-Based Framework for Data-Driven Clothing (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Jin, Ning; Zhu, Yilin; Geng, Zhenglin; Fedkiw, Ron; Bender, Jan and Popa, Tiberiu
We propose a novel approach to learning cloth deformation as a function of body pose, recasting the graph-like triangle mesh data structure into image-based data in order to leverage popular and well-developed convolutional neural networks (CNNs) in a two-dimensional Euclidean domain. A three-dimensional animation of clothing is then equivalent to a sequence of two-dimensional RGB images driven/choreographed by time-dependent joint angles. In order to reduce nonlinearity demands on the neural network, we utilize procedural skinning of the body surface to capture much of the rotation/deformation, so that the RGB images only contain textures of displacement offsets from skin to clothing. Notably, we illustrate that our approach does not require accurate unclothed body shapes or robust skinning techniques. Additionally, we discuss how standard image-based techniques such as image partitioning for higher resolution can readily be incorporated into our framework.

Item: A Bending Model for Nodal Discretizations of Yarn-Level Cloth (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Pizana, José María; Rodríguez, Alejandro; Cirio, Gabriel; Otaduy, Miguel A.; Bender, Jan and Popa, Tiberiu
To deploy yarn-level cloth simulations in production environments, it is paramount to design very efficient implementations that mitigate the cost of the extremely high resolution. To this end, nodal discretizations aligned with the regularity of the fabric structure provide an optimal setting for efficient GPU implementations. However, nodal discretizations complicate the design of robust and controllable bending. In this paper, we address this challenge and propose a model of bending that is both robust and controllable, and employs only nodal degrees of freedom. We extract information about yarn and fabric orientation implicitly from the nodal degrees of freedom, with no need to augment the model explicitly. Most importantly, and unlike previous formulations that use implicit orientations, the computation of bending forces bears no overhead with respect to other nodal forces such as stretch. This is made possible by tracking optimal orientations efficiently. We demonstrate the impact of our bending model in examples with controllable anisotropy, as well as ironing, wrinkling, and plasticity.
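As a point of reference for nodal bending, a generic penalty on the angle formed by three consecutive yarn nodes can be written purely in terms of nodal positions. The sketch below uses such a naive penalty with finite-difference forces; it is not the robust, controllable model the paper derives, and the stiffness kb is a placeholder.

```python
import numpy as np

def bend_energy(x0, x1, x2, kb):
    """Penalty on the angle at the middle of three consecutive yarn nodes:
    E = kb * (1 - cos(theta)), zero when the yarn segment is straight."""
    e1, e2 = x1 - x0, x2 - x1
    c = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
    return kb * (1.0 - np.clip(c, -1.0, 1.0))

def bend_forces(x0, x1, x2, kb, h=1e-6):
    """Nodal bending forces as the negative gradient of the energy,
    computed here by central finite differences for brevity."""
    nodes = [x0.copy(), x1.copy(), x2.copy()]
    forces = []
    for i in range(3):
        grad = np.zeros(3)
        for k in range(3):
            nodes[i][k] += h
            e_plus = bend_energy(*nodes, kb)
            nodes[i][k] -= 2 * h
            e_minus = bend_energy(*nodes, kb)
            nodes[i][k] += h
            grad[k] = (e_plus - e_minus) / (2 * h)
        forces.append(-grad)
    return forces
```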