EG 2019 - Short Papers
Browsing EG 2019 - Short Papers by Author "Fischer, Klaus"
Now showing 1 - 2 of 2
Item
Fine-Grained Semantic Segmentation of Motion Capture Data using Dilated Temporal Fully-Convolutional Networks (The Eurographics Association, 2019)
Authors: Cheema, Noshaba; Hosseini, Somayeh; Sprenger, Janis; Herrmann, Erik; Du, Han; Fischer, Klaus; Slusallek, Philipp
Editors: Cignoni, Paolo; Miguel, Eder
Abstract: Human motion capture data has been widely used in data-driven character animation. To generate realistic, natural-looking motions, most data-driven approaches require considerable pre-processing effort, including motion segmentation and annotation. Existing (semi-)automatic solutions either require hand-crafted features for motion segmentation or do not produce the semantic annotations needed for motion synthesis and for building large-scale motion databases. In addition, human-labeled annotation data suffers from inter- and intra-labeler inconsistencies by design. We propose a semi-automatic framework for semantic segmentation of motion capture data based on supervised machine learning. It first transforms a motion capture sequence into a "motion image" and then applies a convolutional neural network for image segmentation. Dilated temporal convolutions enable the extraction of temporal information from a large receptive field. Our model outperforms two state-of-the-art models for action segmentation, as well as a popular network for sequence modeling. Above all, our method is highly robust to noisy and inaccurate training labels and can therefore handle human errors made during labeling. (A code sketch of the dilated-temporal-convolution idea follows the listing.)

Item
Stylistic Locomotion Modeling with Conditional Variational Autoencoder (The Eurographics Association, 2019)
Authors: Du, Han; Herrmann, Erik; Sprenger, Janis; Cheema, Noshaba; Hosseini, Somayeh; Fischer, Klaus; Slusallek, Philipp
Editors: Cignoni, Paolo; Miguel, Eder
Abstract: We propose a novel approach to creating generative models for distinctive stylistic locomotion synthesis. The approach is inspired by the observation that human styles can easily be distinguished from a few examples. However, learning a generative model for natural human motions, which exhibit large amounts of variation and randomness, would require a lot of training data, and creating such a large motion database for each style would take considerable effort. We therefore propose a generative model that combines the large variation in a neutral motion database with style information from a limited number of examples. We formulate stylistic motion modeling as a conditional distribution learning problem; style transfer is applied implicitly during model learning. A conditional variational autoencoder (CVAE) is used to learn the distribution, with stylistic examples serving as constraints. We demonstrate that, given a few style examples and a neutral motion database, our approach can generate any number of natural-looking human motions in a style similar to the target. (A CVAE sketch follows the listing.)
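
For the first item, here is a minimal PyTorch sketch of the dilated temporal fully-convolutional idea the abstract describes: a motion capture clip is treated as a 1D "motion image" of shape (channels, frames), and stacked convolutions with doubling dilation grow the temporal receptive field while a 1x1 head emits per-frame class logits. This is not the authors' code; the class name, channel counts, depth, and number of classes are illustrative assumptions.

import torch
import torch.nn as nn

class DilatedTemporalFCN(nn.Module):
    def __init__(self, in_channels=63, num_classes=10, hidden=128, levels=5):
        super().__init__()
        layers = []
        ch = in_channels
        for i in range(levels):
            d = 2 ** i  # dilation doubles per level: 1, 2, 4, 8, 16
            layers += [
                # kernel 3 with padding == dilation preserves sequence length
                nn.Conv1d(ch, hidden, kernel_size=3, dilation=d, padding=d),
                nn.BatchNorm1d(hidden),
                nn.ReLU(),
            ]
            ch = hidden
        self.backbone = nn.Sequential(*layers)
        # 1x1 convolution: per-frame class logits (fully convolutional)
        self.head = nn.Conv1d(hidden, num_classes, kernel_size=1)

    def forward(self, x):  # x: (batch, in_channels, frames)
        return self.head(self.backbone(x))  # (batch, num_classes, frames)

# Usage: 63 pose channels (e.g. 21 joints x 3D positions), 240 frames.
model = DilatedTemporalFCN()
logits = model(torch.randn(2, 63, 240))
print(logits.shape)  # torch.Size([2, 10, 240])

With five levels the receptive field already spans dozens of frames, which is the point of dilation here: long temporal context without pooling away the per-frame resolution that segmentation needs.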
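For the second item, a minimal sketch of a conditional variational autoencoder on pose frames: both encoder and decoder are conditioned on a style label, so after training one can sample z from the standard normal prior and decode under a chosen style to generate new motions in that style. This is an assumption-laden toy, not the paper's architecture; all dimensions and the one-hot style encoding are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionCVAE(nn.Module):
    def __init__(self, motion_dim=63, style_dim=4, latent_dim=16, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Linear(motion_dim + style_dim, hidden), nn.ReLU(),
        )
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + style_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, motion_dim),
        )

    def forward(self, x, style):
        h = self.enc(torch.cat([x, style], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # reparameterization trick: sample z while keeping gradients
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.dec(torch.cat([z, style], dim=-1))
        return recon, mu, logvar

def cvae_loss(recon, x, mu, logvar):
    # reconstruction error + KL divergence to the standard normal prior
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# Training step on a batch of pose frames with one-hot style labels.
model = MotionCVAE()
x = torch.randn(8, 63)
s = F.one_hot(torch.randint(0, 4, (8,)), num_classes=4).float()
recon, mu, logvar = model(x, s)
loss = cvae_loss(recon, x, mu, logvar)

# Sampling: decode a random latent under a chosen style.
style = F.one_hot(torch.tensor([2]), num_classes=4).float()
new_frame = model.dec(torch.cat([torch.randn(1, 16), style], dim=-1))

Conditioning both halves of the model on the style label is what lets a small set of stylistic examples act as constraints on a distribution learned mostly from the neutral database, as the abstract describes.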