SCA 12: Eurographics/SIGGRAPH Symposium on Computer Animation
Browsing SCA 12: Eurographics/SIGGRAPH Symposium on Computer Animation by Subject "Computer Graphics [I.3.7]"
Now showing 1 - 3 of 3
Item: Controlling Liquids Using Meshes (The Eurographics Association, 2012)
Raveendran, Karthik; Thuerey, Nils; Wojtan, Chris; Turk, Greg; editors: Jehee Lee and Paul Kry
We present an approach for artist-directed animation of liquids using multiple levels of control over the simulation, ranging from the overall tracking of desired shapes to highly detailed secondary effects such as dripping streams, separating sheets of fluid, surface waves and ripples. The first portion of our technique is a volume-preserving morph that allows the animator to produce a plausible fluid-like motion from a sparse set of control meshes. By rasterizing the resulting control meshes onto the simulation grid, the mesh velocities act as boundary conditions during the projection step of the fluid simulation. We can then blend this motion together with uncontrolled fluid velocities to achieve a more relaxed control over the fluid that captures natural inertial effects. Our method can produce highly detailed liquid surfaces with control over sub-grid details by using a mesh-based surface tracker on top of a coarse grid-based fluid simulation. We can create ripples and waves on the fluid surface by attracting the surface mesh to the control mesh with spring-like forces and also by running a wave simulation over the surface mesh. Our video results demonstrate how our control scheme can be used to create animated characters and shapes that are made of water.

Item: Interactive Steering of Mesh Animations (The Eurographics Association, 2012)
Vögele, Anna; Hermann, Max; Krüger, Björn; Klein, Reinhard; editors: Jehee Lee and Paul Kry
Creating geometrically detailed mesh animations is an involved and resource-intensive process in digital content creation. In this work we present a method to rapidly combine available sparse motion capture data with existing mesh sequences to produce a large variety of new animations. The key idea is to model shape changes correlated to the pose of the animated object via a part-based statistical shape model. We observe that compact linear models suffice for a segmentation into nearly rigid parts. The same segmentation further guides the parameterization of the pose, which is learned in conjunction with the marker movement. Besides the inherent high geometric detail, further benefits of the presented method arise from its robustness against errors in segmentation and pose parameterization. Due to the efficiency of both the learning and synthesis phases, our model allows virtual avatars to be steered interactively from a few markers extracted from video data or from input devices such as the Kinect sensor.
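
As an aside on the first listing above (Controlling Liquids Using Meshes): the abstract's idea of blending rasterized control-mesh velocities with uncontrolled fluid velocities can be illustrated with a minimal sketch. The array shapes, the cell mask, and the blend weight alpha below are illustrative assumptions, not the authors' implementation; a pressure projection would normally follow the blend.

```python
import numpy as np

def blend_control_velocities(u_fluid, u_control, control_mask, alpha=0.7):
    """Blend rasterized control-mesh velocities into a simulated velocity field.

    u_fluid      : (nx, ny, nz, 3) uncontrolled fluid velocities on the grid
    u_control    : (nx, ny, nz, 3) velocities rasterized from the control meshes
    control_mask : (nx, ny, nz)    1 inside rasterized control meshes, 0 elsewhere
    alpha        : blend weight; 1.0 = follow the control meshes, 0.0 = free fluid

    Returns the blended velocity field. A pressure projection (not shown) would
    normally be run afterwards to re-enforce incompressibility.
    """
    w = alpha * control_mask[..., None]          # per-cell blend weight
    return (1.0 - w) * u_fluid + w * u_control
```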
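
Similarly, for the second listing above (Interactive Steering of Mesh Animations), a rough sketch of a part-based linear shape model: one least-squares regressor per nearly rigid part maps pose/marker parameters to vertex displacements. All names, array shapes, and the plain least-squares fit are assumptions for illustration, not the paper's learned model.

```python
import numpy as np

def fit_part_models(poses, displacements, part_ids):
    """Fit one linear regressor per nearly rigid part.

    poses         : (T, P) pose/marker parameters per frame
    displacements : (T, V, 3) per-vertex displacements from the rest shape
    part_ids      : (V,) part label per vertex
    Returns a dict mapping part label -> (P+1, 3*V_part) regression matrix.
    """
    X = np.hstack([poses, np.ones((poses.shape[0], 1))])    # affine term
    models = {}
    for part in np.unique(part_ids):
        Y = displacements[:, part_ids == part, :].reshape(poses.shape[0], -1)
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)
        models[part] = W
    return models

def synthesize(pose, rest_vertices, part_ids, models):
    """Predict a deformed mesh for a new pose vector."""
    x = np.append(pose, 1.0)
    out = rest_vertices.copy()
    for part, W in models.items():
        idx = np.where(part_ids == part)[0]
        out[idx] += (x @ W).reshape(-1, 3)
    return out
```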
Item: Multi-linear Data-Driven Dynamic Hair Model with Efficient Hair-Body Collision Handling (The Eurographics Association, 2012)
Guan, Peng; Sigal, Leonid; Reznitskaya, Valeria; Hodgins, Jessica K.; editors: Jehee Lee and Paul Kry
We present a data-driven method for learning hair models that enables the creation and animation of many interactive virtual characters in real time (for gaming, character pre-visualization and design). Our model has a number of properties that make it appealing for interactive applications: (i) it preserves the key dynamic properties of physical simulation at a fraction of the computational cost, (ii) it gives the user continuous interactive control over the hair styles (e.g., lengths) and dynamics (e.g., softness) without requiring re-styling or re-simulation, (iii) it handles hair-body collisions explicitly using optimization in the low-dimensional reduced space, and (iv) it allows modeling of external phenomena (e.g., wind). Our method builds on the recent success of reduced models for clothing and fluid simulation, but extends them in a number of significant ways. We model the motion of hair in a conditional reduced sub-space, where the hair basis vectors, which encode dynamics, are linear functions of user-specified hair parameters. We formulate collision handling as an optimization in this reduced sub-space using fast iterative least squares. We demonstrate our method by building dynamic, user-controlled models of hair styles.
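
For the hair model above, a minimal sketch assuming a reduced representation of the form x = mean + B(p) z: the basis B(p) is a linear function of user hair parameters p, and hair-body collisions are resolved by iterated least squares over the reduced coordinates z. The SDF interface, the finite-difference normals, and the projection heuristic are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def hair_basis(p, B0, B_list):
    """Conditional reduced basis as a linear function of user hair parameters p
    (e.g., length, softness). B0 and each B_list[i] are (D, k) matrices."""
    return B0 + sum(p_i * B_i for p_i, B_i in zip(p, B_list))

def sdf_normals(points, sdf, eps=1e-3):
    """Outward directions from finite differences of a signed distance field."""
    g = np.zeros_like(points)
    for a in range(3):
        e = np.zeros(3); e[a] = eps
        g[:, a] = (sdf(points + e) - sdf(points - e)) / (2.0 * eps)
    return g / np.maximum(np.linalg.norm(g, axis=1, keepdims=True), 1e-12)

def resolve_collisions(z, B, mean, sdf, iters=10, step=0.5):
    """Push hair points outside the body by iterated least squares over the
    reduced coordinates z; hair positions are reconstructed as mean + B @ z.
    sdf(points) returns signed distances, negative inside the body."""
    for _ in range(iters):
        x = (mean + B @ z).reshape(-1, 3)
        d = sdf(x)
        bad = d < 0.0
        if not np.any(bad):
            break
        # Move violating points to the body surface along the outward direction,
        # then find the least-squares change of z that best realizes the fix.
        target = x.copy()
        target[bad] -= d[bad, None] * sdf_normals(x[bad], sdf)
        dz, *_ = np.linalg.lstsq(B, target.reshape(-1) - (mean + B @ z), rcond=None)
        z = z + step * dz
    return z
```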