Fine-Grained Semantic Segmentation of Motion Capture Data using Dilated Temporal Fully-Convolutional Networks

Authors: Cheema, Noshaba; Hosseini, Somayeh; Sprenger, Janis; Herrmann, Erik; Du, Han; Fischer, Klaus; Slusallek, Philipp
Editors: Cignoni, Paolo and Miguel, Eder
Issued: 2019 (made available 2019-05-05)
ISSN: 1017-4656
DOI: https://doi.org/10.2312/egs.20191017
Handle: https://diglib.eg.org:443/handle/10.2312/egs20191017
Pages: 69-72

Abstract: Human motion capture data has been widely used in data-driven character animation. In order to generate realistic, natural-looking motions, most data-driven approaches require considerable pre-processing effort, including motion segmentation and annotation. Existing (semi-)automatic solutions either require hand-crafted features for motion segmentation or do not produce the semantic annotations required for motion synthesis and for building large-scale motion databases. In addition, human-labeled annotation data suffers from inter- and intra-labeler inconsistencies by design. We propose a semi-automatic framework for semantic segmentation of motion capture data based on supervised machine learning techniques. It first transforms a motion capture sequence into a "motion image" and applies a convolutional neural network for image segmentation. Dilated temporal convolutions enable the extraction of temporal information from a large receptive field. Our model outperforms two state-of-the-art models for action segmentation, as well as a popular network for sequence modeling. Most importantly, our method is very robust under noisy and inaccurate training labels and thus can handle human errors during the labeling process.

Keywords: Computing methodologies; Motion processing; Motion capture; Image processing
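To make the core idea concrete, the sketch below shows how stacked dilated temporal convolutions can label every frame of a "motion image" (frames by joint channels) while the receptive field grows exponentially with depth. It is a minimal PyTorch illustration under assumed settings: the layer count, channel widths, class count, and residual connections are illustrative choices, not the architecture published in the paper.

```python
# Minimal sketch of a dilated temporal fully-convolutional network for
# per-frame motion segmentation. All sizes and names are illustrative
# assumptions, not the authors' exact model.
import torch
import torch.nn as nn


class DilatedTemporalBlock(nn.Module):
    """1-D temporal convolution with dilation; padding preserves the frame count."""

    def __init__(self, channels, dilation):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                      # x: (batch, channels, frames)
        return self.relu(self.conv(x)) + x     # residual connection (assumed)


class DilatedTemporalFCN(nn.Module):
    """Stacks dilated blocks (dilation 1, 2, 4, ...) so the temporal receptive
    field grows exponentially, then maps each frame to per-class scores."""

    def __init__(self, in_channels, num_classes, hidden=64, num_layers=4):
        super().__init__()
        self.input_proj = nn.Conv1d(in_channels, hidden, kernel_size=1)
        self.blocks = nn.Sequential(
            *[DilatedTemporalBlock(hidden, dilation=2 ** i) for i in range(num_layers)]
        )
        self.classifier = nn.Conv1d(hidden, num_classes, kernel_size=1)

    def forward(self, motion_image):           # (batch, joint_channels, frames)
        h = self.input_proj(motion_image)
        h = self.blocks(h)
        return self.classifier(h)              # (batch, num_classes, frames)


if __name__ == "__main__":
    # Example: 66 joint-angle channels, 240 frames, 8 hypothetical action classes.
    model = DilatedTemporalFCN(in_channels=66, num_classes=8)
    motion_image = torch.randn(1, 66, 240)
    print(model(motion_image).shape)           # torch.Size([1, 8, 240])
```

Because every layer is a (1-D) convolution, the network is fully convolutional over time: it accepts sequences of arbitrary length and emits one class score vector per frame, which is what frame-level semantic segmentation of motion capture data requires.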