Spatial Matching of Animated Meshes
Authors: Seo, Hyewon; Cordier, Frederic
Editors: Eitan Grinspun, Bernd Bickel, Yoshinori Dobashi
Date issued: 2016-10-11 (2016)
ISSN: 1467-8659
DOI: 10.1111/cgf.13000 (https://doi.org/10.1111/cgf.13000)
URL: https://diglib.eg.org:443/handle/10.1111/cgf13000
Pages: 21-32

Abstract:
This paper presents a new technique that uses deformation and motion properties of animated meshes to find spatial correspondences between them. Given a pair of animated meshes exhibiting semantically similar motion, we compute a sparse set of feature points on each mesh and establish spatial correspondences among them so that points with similar motion behavior are matched. At the core of our technique is our new dynamic feature descriptor, named AnimHOG, which encodes local deformation characteristics. AnimHOG is obtained by computing the gradient of a scalar field inside the spatiotemporal neighborhood of a point of interest, where the scalar values are derived from the deformation characteristic associated with each vertex at each frame. The final matching is formulated as a discrete optimization problem that finds, for each feature point on the source mesh, the match that maximizes the descriptor similarity between corresponding feature pairs as well as the compatibility and consistency measured across pairs of correspondences. Consequently, reliable correspondences can be found even between meshes of very different shapes, as long as their motions are similar. We demonstrate the performance of our technique through the high-quality matching results obtained on a number of animated mesh pairs.

Keywords: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling - Geometric algorithms, languages, and systems; I.3.6 [Computer Graphics]: Methodology and Techniques - Graphics data structures and data types; I.3.7 [Computer Graphics]: Three Dimensional Graphics and Realism - Animation
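The abstract describes a HOG-style descriptor built from gradients of a scalar deformation field over a spatiotemporal neighborhood. As an illustration of that general idea only (not the paper's actual AnimHOG, which operates on mesh neighborhoods), the following sketch computes a weighted orientation histogram over a regular space-by-time grid of scalar values; the function name, grid simplification, and bin count are all assumptions for illustration:

```python
import numpy as np

def hog_like_descriptor(scalar_field, n_bins=8):
    """Illustrative HOG-style descriptor over a 2-D (space x time) scalar
    field, loosely inspired by the AnimHOG idea in the abstract.

    NOTE: a hypothetical simplification -- the real AnimHOG works on the
    spatiotemporal neighborhood of a mesh point, not a regular grid.

    scalar_field: 2-D array; rows = vertices in a local patch, cols = frames,
                  entries = per-vertex deformation values.
    """
    # Finite-difference gradients along the spatial and temporal axes.
    g_space, g_time = np.gradient(scalar_field.astype(float))
    mag = np.hypot(g_space, g_time)                 # gradient magnitude
    ang = np.mod(np.arctan2(g_space, g_time), np.pi)  # unsigned orientation in [0, pi)
    # Magnitude-weighted orientation histogram, L2-normalized.
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

Descriptors like this could then be compared (e.g. by Euclidean or cosine distance) when scoring candidate feature correspondences, which is the role the descriptor-similarity term plays in the matching formulation described above.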