39-Issue 2
Browsing 39-Issue 2 by Author "Casas, Dan"
Item: Modeling and Estimation of Nonlinear Skin Mechanics for Animated Avatars (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Romero, Cristian; Otaduy, Miguel A.; Casas, Dan; Pérez, Jesús
Editors: Panozzo, Daniele and Assarsson, Ulf

Data-driven models of human avatars have shown very accurate representations of static poses with soft-tissue deformations. However, they are not yet capable of precisely representing highly nonlinear deformations and highly dynamic effects. Nonlinear skin mechanics are essential for a realistic depiction of animated avatars interacting with the environment, but controlling physics-only solutions often results in a very complex parameterization task. In this work, we propose a hybrid model in which the soft-tissue deformation of animated avatars is built as a combination of a data-driven statistical model, which kinematically drives the animation, and an FEM mechanical simulation. Our key contribution is the definition of deformation mechanics in a reference pose space by inverse skinning of the statistical model. This way, we retain as much as possible of the accurate static data-driven deformation, and use a custom anisotropic nonlinear material to accurately represent skin dynamics. Model parameters, including the heterogeneous distribution of skin thickness and material properties, are automatically optimized from 4D captures of humans showing soft-tissue deformations.

Item: SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Santesteban, Igor; Garces, Elena; Otaduy, Miguel A.; Casas, Dan
Editors: Panozzo, Daniele and Assarsson, Ulf

We present SoftSMPL, a learning-based method to model realistic soft-tissue dynamics as a function of body shape and motion. Datasets from which to learn such a task are scarce and expensive to generate, which makes trained models prone to overfitting. At the core of our method are three key contributions that enable us to model highly realistic dynamics and achieve better generalization than state-of-the-art methods while training on the same data: first, a novel motion descriptor that disentangles the standard pose representation by removing subject-specific features; second, a neural-network-based recurrent regressor that generalizes to unseen shapes and motions; and third, a highly efficient nonlinear deformation subspace capable of representing soft-tissue deformations of arbitrary shapes. We demonstrate qualitative and quantitative improvements over existing methods and, additionally, show the robustness of our method on a variety of motion capture databases.
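The inverse-skinning step described in the first abstract (mapping the statistical model's posed surface back to a reference pose, where deformation mechanics are then defined) can be sketched as follows. This is a minimal illustration assuming plain linear blend skinning; the function name, argument layout, and the LBS setup are assumptions for clarity, not the paper's actual pipeline:

```python
import numpy as np

def inverse_lbs(posed_verts, weights, bone_transforms):
    """Map posed vertices back to the reference (unposed) pose by
    inverting linear blend skinning. Names and shapes are illustrative.

    posed_verts:     (V, 3) vertex positions in the posed space
    weights:         (V, B) skinning weights, each row sums to 1
    bone_transforms: (B, 4, 4) per-bone rigid transforms (rest -> posed)
    """
    # Per-vertex blended transform: T_v = sum_b w_vb * G_b
    T = np.einsum('vb,bij->vij', weights, bone_transforms)      # (V, 4, 4)
    # Homogeneous coordinates of the posed vertices
    ph = np.concatenate([posed_verts,
                         np.ones((len(posed_verts), 1))], axis=1)
    # Unpose each vertex: x_ref = T_v^{-1} x_posed
    ref = np.einsum('vij,vj->vi', np.linalg.inv(T), ph)
    return ref[:, :3]
```

A quick sanity check is the round trip: posing rest-pose vertices with forward LBS and then calling `inverse_lbs` should recover the rest pose exactly.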
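The recurrent regressor in the second abstract maps a sequence of motion descriptors to per-frame coefficients of the soft-tissue deformation subspace. A minimal sketch of that recurrent structure is below; the function name, shapes, and the plain tanh-RNN cell are assumptions for illustration only, and SoftSMPL's actual network is more elaborate:

```python
import numpy as np

def rnn_regress(descriptors, Wx, Wh, Wo, h0=None):
    """Minimal recurrent regressor sketch (illustrative names): maps a
    motion-descriptor sequence to per-frame subspace coefficients.

    descriptors: (T, D) motion descriptor per frame
    Wx: (H, D), Wh: (H, H), Wo: (K, H) weight matrices
    Returns: (T, K) predicted subspace coefficients per frame.
    """
    h = np.zeros(Wh.shape[0]) if h0 is None else h0
    out = []
    for x in descriptors:
        h = np.tanh(Wx @ x + Wh @ h)   # hidden state carries the dynamics
        out.append(Wo @ h)             # linear readout to subspace coeffs
    return np.stack(out)
```

The hidden state is what lets the model express velocity- and history-dependent soft-tissue effects that a per-frame (feed-forward) regressor cannot capture.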