Authors: Meinard Müller, Andreas Baak, Hans-Peter Seidel
Editors: Eitan Grinspun, Jessica Hodgins
Title: Efficient and Robust Annotation of Motion Capture Data
Year: 2009
ISBN: 978-1-60558-610-6
ISSN: 1727-5288
DOI: https://doi.org/10.1145/1599470.1599473
Pages: 17-26
Categories: H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; I.3.7 [Three Dimensional Graphics and Realism]: Animation

Abstract: In view of increasing collections of available 3D motion capture (mocap) data, the task of automatically annotating large sets of unstructured motion data is gaining in importance. In this paper, we present an efficient approach to label mocap data according to a given set of motion categories or classes, each specified by a suitable set of positive example motions. For each class, we derive a motion template that captures the consistent and variable aspects of the motion class in an explicit matrix representation. We then present a novel annotation procedure, where the unknown motion data is segmented and annotated by locally comparing it with the available motion templates. This procedure is supported by an efficient keyframe-based preprocessing step, which also significantly improves the annotation quality by eliminating false positive matches. As a further contribution, we introduce a genetic learning algorithm to automatically learn the necessary keyframes from the given example motions. For evaluation, we report on various experiments conducted on two freely available sets of motion capture data (CMU and HDM05).
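To make the template-based annotation idea in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it assumes the unknown motion and each class template are matrices of frame-wise relational features with values in [0, 1] (one column per frame), uses a fixed-length sliding window with a simple mean-absolute-difference cost in place of the paper's subsequence alignment, and all names (`annotate`, `local_cost`, the threshold value) are illustrative assumptions.

```python
import numpy as np

def local_cost(template_col, feature_col):
    # Per-frame cost: mean absolute disagreement between template entries
    # (consistent aspects near 0/1, variable aspects near 0.5) and the
    # observed feature values of the unknown motion.
    return np.mean(np.abs(template_col - feature_col))

def annotate(X, T, threshold=0.2):
    """Mark every frame of X (features x frames) covered by a window
    that matches class template T under the average local cost.
    Sketch only: a fixed-length window replaces subsequence alignment."""
    n_frames, win = X.shape[1], T.shape[1]
    labels = np.zeros(n_frames, dtype=bool)
    for start in range(n_frames - win + 1):
        window = X[:, start:start + win]
        cost = np.mean([local_cost(T[:, k], window[:, k]) for k in range(win)])
        if cost < threshold:
            labels[start:start + win] = True  # frames annotated with this class
    return labels

# Toy usage with random data: a 4-feature template of length 10
# and a 100-frame unknown motion.
rng = np.random.default_rng(0)
T = rng.random((4, 10))
X = (rng.random((4, 100)) > 0.5).astype(float)
print(annotate(X, T).sum(), "frames annotated")
```

In the paper, such template comparisons are additionally gated by a keyframe-based preprocessing step, so only candidate segments that contain the learned keyframes are scored at all, which is what eliminates many false positive matches.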