SCA 11: Eurographics/SIGGRAPH Symposium on Computer Animation
Browsing SCA 11: Eurographics/SIGGRAPH Symposium on Computer Animation by Subject "Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism-Animation"
Now showing 1 - 6 of 6
Item
Biomechanically-Inspired Motion Path Editing (The Eurographics Association, 2011) Lockwood, Noah; Singh, Karan; A. Bargteil and M. van de Panne
We present a system for interactive kinematic editing of motion paths and timing that employs various biomechanical observations to augment and restrict the edited motion. Realistic path manipulations are enforced by restricting user interaction to handles identified along a motion path using motion extrema. An as-rigid-as-possible deformation technique, modified specifically for use on motion paths, is used to deform the path to satisfy the user-manipulated handle positions. After all motion poses have been adjusted to satisfy the new path, an automatic timewarping step modifies the timing of the new motion to preserve the timing qualities of the original motion. This timewarp is based on biomechanical heuristics relating velocity to stride length and path curvature, as well as the preservation of acceleration for ballistic motion. We show that our system can be used to quickly and easily modify a variety of locomotive motions, and can accurately reproduce recorded motions that were not used during the editing process.

Item
Content Retargeting Using Parameter-Parallel Facial Layers (The Eurographics Association, 2011) Kholgade, Natasha; Matthews, Iain; Sheikh, Yaser; A. Bargteil and M. van de Panne
Facial motion retargeting approaches often transfer expressions by establishing correspondences between shared units of motion, such as action units, or spatial correspondences of landmarks between the source actor and target character faces. When the actor and character are structurally dissimilar, shared units of motion or spatial landmarks may not exist, and subtle styles of performance may differ.
We present a method to deconstruct the content of an actor's facial expression into three parameter-parallel layers using a composition function, transfer the content to equivalent parameter-parallel layers for the character, and reconstruct the character's expression using the same composition function. Our algorithm uses the same parameter-parallel layered model of facial expression for both the actor and character, separating the content of facial expressions into emotion, speech, and eye-blink layers. Facial motion in each layer is embedded in simplicial bases, each of which encodes semantically significant configurations of the face. We show the transfer of facial motion capture and video-based tracking of the eyes and mouth of an actor to a number of faces with dissimilar facial structure and expressive disposition.

Item
Element-Wise Mixed Implicit-Explicit Integration for Stable Dynamic Simulation of Deformable Objects (The Eurographics Association, 2011) Fierz, B.; Spillmann, J.; Harders, M.; A. Bargteil and M. van de Panne
In order to evolve a deformable object in time, the underlying equations of motion have to be numerically integrated. This is commonly done by employing either an explicit or an implicit integration scheme. While explicit methods are only stable for small time steps, implicit methods are unconditionally stable. In this paper, we present a novel methodology to combine explicit and implicit linear integration approaches, based on element-wise stability considerations. First, as a pre-computation step, we detect the ill-shaped simulation elements which hinder the stable explicit integration of the element nodes. These nodes are then simulated implicitly, while the remaining parts of the mesh are explicitly integrated. As a consequence, larger integration time steps than in purely explicit methods are possible, while the computation time per step is smaller than in purely implicit integration.
During modifications such as cutting or fracturing, only newly created or modified elements need to be re-evaluated, thus making the technique usable in real-time simulations. In addition, our method reduces problems due to numerical dissipation.

Item
Graph-based Fire Synthesis (The Eurographics Association, 2011) Zhang, Yubo; Correa, Carlos D.; Ma, Kwan-Liu; A. Bargteil and M. van de Panne
We present a novel graph-based data-driven technique for cost-effective fire modeling. This technique allows composing long animation sequences from a small number of short simulations. While traditional techniques such as motion graphs and motion blending work well for character motion synthesis, they cannot be trivially applied to fluids to produce results with physically consistent properties, which are crucial to the visual appearance of fluids. Motivated by the motion graph technique used in character animation, we introduce a new type of graph which can be applied to create various fire phenomena. Each graph node consists of a group of compact spatial-temporal flow pathlines instead of a set of volumetric state fields. Consequently, achieving smooth transitions between discontinuous graph nodes for modeling turbulent fires becomes feasible and computationally efficient. The synthesized particle flow results allow direct particle control, which is much more flexible than a full volumetric representation of the simulation output. The accompanying video shows the versatility and potential power of this new technique for synthesizing real-time complex fire at a quality comparable to production animations.

Item
Real-Time Classification of Dance Gestures from Skeleton Animation (The Eurographics Association, 2011) Raptis, Michalis; Kirovski, Darko; Hoppe, Hugues; A. Bargteil and M. van de Panne
We present a real-time gesture classification system for skeletal wireframe motion.
Its key components include an angular representation of the skeleton designed for recognition robustness under noisy input, a cascaded correlation-based classifier for multivariate time-series data, and a distance metric based on dynamic time warping to evaluate the difference in motion between an acquired gesture and an oracle for the matching gesture. While the first and last tools are generic in nature and could be applied to any gesture-matching scenario, the classifier is conceived based on the assumption that the input motion adheres to a known, canonical time base: a musical beat. On a benchmark comprising 28 gesture classes, with hundreds of gesture instances recorded using the Xbox Kinect platform and performed by dozens of subjects for each gesture class, our classifier achieves an average accuracy of 96.9% for approximately 4-second skeletal motion recordings. This accuracy is remarkable given the input noise from the real-time depth sensor.

Item
Spacetime Vertex Constraints for Dynamically-based Adaptation of Motion-Captured Animation (The Eurographics Association, 2011) O'Brien, C.; Dingliana, J.; Collins, S.; A. Bargteil and M. van de Panne
We present a novel technique for editing motion-captured animation. Our iterative solver produces physically plausible adapted animations that satisfy alterations in foot and hand contact placement with the animated character's surroundings. The technique uses a system of particles to represent the poses and mass distribution of the character at sampled frames of the animation. Constraints between the vertices within each frame enforce the skeletal structure, including joint limits. Novel constraints extending over vertices in several frames enforce the aggregate dynamics of the character, as well as features such as joint acceleration smoothness. We demonstrate adaptation of several animations to altered foot and hand placement.
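The dynamic time warping distance used in the dance-gesture classification entry above admits a compact illustration. The sketch below is a generic textbook DTW over sequences of per-frame feature vectors (e.g. joint angles), not the authors' implementation; the `dtw_distance` function name, the Euclidean frame cost, and the toy gestures are assumptions made for the example:

```python
import math

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance
    between two sequences of per-frame feature vectors."""
    n, m = len(a), len(b)
    # cost[i][j] = best warped cost of aligning a[:i] with b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two frames being aligned
            d = math.dist(a[i - 1], b[j - 1])
            # extend the cheapest predecessor: match, or warp either axis
            cost[i][j] = d + min(cost[i - 1][j - 1],
                                 cost[i - 1][j],
                                 cost[i][j - 1])
    return cost[n][m]

# Two toy 1-D "gestures": the same shape performed at different speeds.
g1 = [(0.0,), (1.0,), (2.0,), (1.0,), (0.0,)]
g2 = [(0.0,), (0.0,), (1.0,), (2.0,), (2.0,), (1.0,), (0.0,)]

print(dtw_distance(g1, g2))  # → 0.0: warping absorbs the timing difference
```

A plain Euclidean comparison would penalize `g2` for holding poses longer, whereas DTW aligns the two time axes first, which is why it suits comparing an acquired gesture against a prototype performed at a different tempo.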