
    Fine-Grained Semantic Segmentation of Motion Capture Data using Dilated Temporal Fully-Convolutional Networks

    View/Open
    069-072.pdf (670.7Kb)
    segmented_images.zip (501.4Mb)
    segmented_motion_examples_(videos).zip (137.0Mb)
    Date
    2019
    Author
    Cheema, Noshaba
    Hosseini, Somayeh
    Sprenger, Janis
    Herrmann, Erik
    Du, Han
    Fischer, Klaus
    Slusallek, Philipp

    Abstract
    Human motion capture data has been widely used in data-driven character animation. In order to generate realistic, natural-looking motions, most data-driven approaches require considerable pre-processing effort, including motion segmentation and annotation. Existing (semi-)automatic solutions either require hand-crafted features for motion segmentation or do not produce the semantic annotations required for motion synthesis and for building large-scale motion databases. In addition, human-labeled annotation data suffers from inter- and intra-labeler inconsistencies by design. We propose a semi-automatic framework for semantic segmentation of motion capture data based on supervised machine learning techniques. It first transforms a motion capture sequence into a "motion image" and applies a convolutional neural network for image segmentation. Dilated temporal convolutions enable the extraction of temporal information from a large receptive field. Our model outperforms two state-of-the-art models for action segmentation, as well as a popular network for sequence modeling. Above all, our method is very robust to noisy and inaccurate training labels and thus can handle human errors during the labeling process.
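    To make the idea of dilated temporal fully-convolutional segmentation concrete, the sketch below shows a minimal PyTorch model that treats a motion capture sequence as a multi-channel 1-D "motion image" and predicts a semantic label for every frame. This is an illustration only, not the authors' exact architecture: the class name DilatedTemporalFCN, the layer sizes, the dilation rates, and the example channel/class counts are all assumptions chosen for the demonstration.

    import torch
    import torch.nn as nn

    # Illustrative sketch only: layer sizes, dilation rates, and the
    # DilatedTemporalFCN name are assumptions, not the paper's exact model.
    class DilatedTemporalFCN(nn.Module):
        """Per-frame segmentation of a (batch, channels, frames) 'motion image'."""
        def __init__(self, in_channels, num_classes, hidden=64, dilations=(1, 2, 4, 8)):
            super().__init__()
            layers, c = [], in_channels
            for d in dilations:
                # padding == dilation keeps the temporal length unchanged (kernel size 3),
                # so every input frame receives an output label
                layers += [nn.Conv1d(c, hidden, kernel_size=3, dilation=d, padding=d),
                           nn.ReLU()]
                c = hidden
            self.backbone = nn.Sequential(*layers)
            self.head = nn.Conv1d(hidden, num_classes, kernel_size=1)  # frame-wise classifier

        def forward(self, x):
            return self.head(self.backbone(x))  # (batch, num_classes, frames) logits

    # Example with made-up dimensions: 60 joint channels, 300 frames, 8 motion labels.
    model = DilatedTemporalFCN(in_channels=60, num_classes=8)
    logits = model(torch.randn(2, 60, 300))  # -> shape (2, 8, 300)

    With kernel size 3 and dilations 1, 2, 4, 8, the stacked layers cover a receptive field of 31 frames while preserving per-frame output resolution, which is the motivation for using dilated temporal convolutions rather than pooling.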
    BibTeX
    @inproceedings {s.20191017,
    booktitle = {Eurographics 2019 - Short Papers},
    editor = {Cignoni, Paolo and Miguel, Eder},
    title = {{Fine-Grained Semantic Segmentation of Motion Capture Data using Dilated Temporal Fully-Convolutional Networks}},
    author = {Cheema, Noshaba and Hosseini, Somayeh and Sprenger, Janis and Herrmann, Erik and Du, Han and Fischer, Klaus and Slusallek, Philipp},
    year = {2019},
    publisher = {The Eurographics Association},
    ISSN = {1017-4656},
    DOI = {10.2312/egs.20191017}
    }
    URI
    https://doi.org/10.2312/egs.20191017
    https://diglib.eg.org:443/handle/10.2312/egs20191017
    Collections
    • EG 2019 - Short Papers

    Eurographics Association copyright © 2013 - 2020
    System hosted at Graz University of Technology.