SCA 03: Eurographics/SIGGRAPH Symposium on Computer Animation
Showing 20 of 38 items, sorted by issue date.
Item: Visual Simulation of Ice Crystal Growth (The Eurographics Association, 2003)
Kim, Theodore; Lin, Ming C. Edited by D. Breen and M. Lin.
The beautiful, branching structure of ice is one of the most striking visual phenomena of the winter landscape, yet there has been little study of modeling this effect in computer graphics. In this paper, we present a novel approach for the visual simulation of ice growth. We use a numerical simulation technique from computational physics, the "phase field method", and modify it to allow aesthetic manipulation of ice crystal growth. We present acceleration techniques that achieve interactive simulation performance, as well as a novel geometric sharpening algorithm that removes some of the smoothing artifacts of the implicit representation. We have successfully applied this approach to generate ice crystal growth on 3D object surfaces in several scenes.

Item: On Creating Animated Presentations (The Eurographics Association, 2003)
Zongker, Douglas E.; Salesin, David H. Edited by D. Breen and M. Lin.
Computers are used to display visuals for millions of live presentations each day, yet only the tiniest fraction of these make any real use of the powerful graphics hardware available on virtually all of today's machines. In this paper, we describe our efforts toward harnessing this power to create better presentations: presentations that include meaningful animation as well as at least a limited degree of interactivity. Our approach has been iterative, alternating between creating animated talks using available tools and improving the tools to better support the kinds of talks we wanted to make. Through this cyclic design process, we have identified a set of common authoring paradigms that we believe a system for building animated presentations should support. We describe these paradigms and present the latest version of our script-based system for creating animated presentations, called SLITHY.
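A minimal sketch of the phase-field idea behind "Visual Simulation of Ice Crystal Growth" above: a scalar field phi (1 = ice, 0 = water) evolves under diffusion plus a tilted double-well reaction term that favors freezing. All parameter names and values here are illustrative, not taken from the paper:

```python
import numpy as np

def laplacian(f):
    # 5-point Laplacian on a periodic grid (np.roll wraps at the edges)
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

def grow_ice(n=64, steps=200, dt=0.05, eps=1.0, alpha=0.9):
    """Explicit phase-field update: phi = 1 is ice, phi = 0 is water.
    The supercooling parameter alpha tilts the double well so the
    solid phase expands outward from the seed crystal."""
    phi = np.zeros((n, n))
    phi[n//2 - 2:n//2 + 2, n//2 - 2:n//2 + 2] = 1.0  # 4x4 seed crystal
    m = 0.5 * (1.0 - alpha)  # well tilt; m < 0.5 favors freezing
    for _ in range(steps):
        phi += dt * (eps * laplacian(phi) + phi * (1.0 - phi) * (phi - m))
        phi = np.clip(phi, 0.0, 1.0)
    return phi
```

Rendering the phi = 0.5 level set gives the crystal boundary; the paper's sharpening algorithm targets exactly this kind of smoothed implicit interface.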
We show several examples of actual animated talks that were created and given with versions of SLITHY, including one talk presented at SIGGRAPH 2000 and four talks presented at SIGGRAPH 2002. Finally, we describe a set of design principles that we have found useful for making good use of animation in presentations.

Item: A Practical Dynamics System (The Eurographics Association, 2003)
Kacic-Alesic, Zoran; Nordenstam, Marcus; Bullock, David. Edited by D. Breen and M. Lin.
We present an effective, production-proven dynamics system. It uses an explicit time-differencing method that is efficient, reasonably accurate, conditionally stable, and above all simple to implement. We describe issues related to integrating physically based simulation techniques into an interactive animation system, present a high-level description of the system's architecture, report on techniques that work, and provide observations that may seem obvious, but only in retrospect. Applications include rigid and deformable body dynamics, particle dynamics, and, at a basic level, hair and cloth simulation.

Item: Unsupervised Learning for Speech Motion Editing (The Eurographics Association, 2003)
Cao, Yong; Faloutsos, Petros; Pighin, Frédéric. Edited by D. Breen and M. Lin.
We present a new method for editing speech-related facial motions. Our method uses an unsupervised learning technique, Independent Component Analysis (ICA), to extract a set of meaningful parameters without any annotation of the data. With ICA, we are able to solve a blind source separation problem and describe the original data as a linear combination of two sources. One source captures content (speech) and the other captures style (emotion). By manipulating the independent components, we can edit the motions in intuitive ways.

Item: Feel the 'Fabric': An Audio-Haptic Interface (The Eurographics Association, 2003)
Huang, G.; Metaxas, D.; Govindaraj, M. Edited by D. Breen and M. Lin.
An objective fabric modeling system should convey not only visual but also haptic and audio sensory feedback to remote/internet users via an audio-haptic interface. In this paper, we develop a fabric surface property modeling system consisting of stylus-based modeling of a fabric's characteristic sound, and an audio-haptic interface. By using a stylus, people can perceive a fabric's surface roughness, friction, and softness, though not as precisely as with their bare fingers. The audio-haptic interface is intended to simulate the case of "feeling a virtually fixed fabric via a rigid stylus" using the PHANToM haptic interface. We develop a DFFT-based correlation-restoration method to model the surface roughness and friction coefficient of a fabric, and a physically based method to model the sound of a fabric when rubbed by a stylus. The audio-haptic interface, which renders synchronized auditory and haptic stimuli when the virtual stylus rubs the surface of a virtual fabric, is developed in VC++ 6.0 using OpenGL and the PHANToM GHOST SDK. We asked subjects to test our audio-haptic interface, and they were able to rank the surface properties of virtual fabrics in the correct order. We show that the virtual fabric is a good model of its real counterpart.

Item: Learning Controls for Blend Shape Based Realistic Facial Animation (The Eurographics Association, 2003)
Joshi, Pushkar; Tien, Wen C.; Desbrun, Mathieu; Pighin, Frédéric. Edited by D. Breen and M. Lin.
Blend shape animation is the method of choice for keyframe facial animation: a set of blend shapes (key facial expressions) is used to define a linear space of facial expressions. However, in order to capture a significant range of complexity of human expressions, blend shapes need to be segmented into smaller regions where key idiosyncrasies of the face being animated are present. Performing this segmentation by hand requires skill and a lot of time.
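The linear blend-shape space described above can be sketched as a neutral face plus a weighted sum of key-expression deltas. A minimal sketch; the toy vertex data and weights are illustrative, not from the paper:

```python
import numpy as np

def blend(neutral, keys, weights):
    """Evaluate a linear blend-shape model: neutral vertices plus a
    weighted sum of per-key displacement deltas."""
    out = neutral.copy()
    for key, w in zip(keys, weights):
        out += w * (key - neutral)
    return out

# Toy face of 3 vertices; two key expressions displace them vertically.
neutral = np.zeros((3, 3))
smile = neutral + [0.0, 0.1, 0.0]    # broadcasts over all vertices
frown = neutral + [0.0, -0.2, 0.0]

# 50% smile + 25% frown: y offset = 0.5*0.1 + 0.25*(-0.2) = 0
face = blend(neutral, [smile, frown], [0.5, 0.25])
```

Segmenting the face means running this same evaluation per region with independent weights, which is what makes the hand-segmentation step the paper automates so valuable.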
In this paper, we propose an automatic, physically motivated segmentation that learns the controls and parameters directly from the set of blend shapes. We show the usefulness and efficiency of this technique for both motion-capture animation and keyframing. We also provide a rendering algorithm to enhance the visual realism of a blend shape model.

Item: Interactive Physically Based Solid Dynamics (The Eurographics Association, 2003)
Hauth, M.; Groß, J.; Straßer, W. Edited by D. Breen and M. Lin.
The interactive simulation of deformable solids has become a major working area in computer graphics. We present a sophisticated material law, better suited to dynamical computations than the standard approaches. As an important example, it is employed to reproduce measured material data from biological soft tissue. We embed it into a state-of-the-art finite element setting employing an adaptive basis. For time integration, the use of an explicit stabilized Runge-Kutta method is proposed.

Item: Interactive Control of Component-based Morphing (The Eurographics Association, 2003)
Zhao, Yonghong; Ong, Hong-Yang; Tan, Tiow-Seng; Xiao, Yongguan. Edited by D. Breen and M. Lin.
This paper presents an interactive morphing framework that empowers users to conveniently and effectively control the whole morphing process. Although research on mesh morphing has reached a state where most computational problems have been solved in general, the novelty of our framework lies in the integration of global-level and local-level user control through the use of components, and the incorporation of deduction and assistance in user interaction. Given two polygonal meshes, users can choose to specify their requirements either at the global level over components or at the local level within components, whichever is more intuitive. Based on user specifications, the framework proposes several techniques to deduce implied correspondences and add assumed correspondences at both levels.
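Explicit time stepping of the kind relied on in "A Practical Dynamics System" and "Interactive Physically Based Solid Dynamics" above trades stability for simplicity. A minimal semi-implicit Euler step for a damped spring (illustrative constants, not from either paper):

```python
def step(x, v, dt, k=10.0, c=2.0, rest=1.0):
    """One semi-implicit (symplectic) Euler step for a unit-mass point on a
    damped spring: explicit, simple, and stable only for small enough dt."""
    f = -k * (x - rest) - c * v  # spring toward rest length, plus damping
    v = v + dt * f               # update velocity first...
    x = x + dt * v               # ...then position, using the new velocity
    return x, v

x, v = 2.0, 0.0
for _ in range(2000):            # 20 simulated seconds at dt = 0.01
    x, v = step(x, v, dt=0.01)
```

Running the same loop with a much larger dt (say 0.5) diverges, which is the "conditionally stable" trade-off the first abstract names; stabilized Runge-Kutta methods, as in the second, push that limit further while staying explicit.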
The framework also supports multi-level interpolation control: users can operate on a component as a whole or on its individual vertices to specify trajectories. On the whole, in this multi-level, component-based framework, users can choose to specify any number of requirements at each level, and the system completes all other tasks to produce the final morphs. Therefore, user control is greatly enhanced, and even an amateur can use the system to design morphs with ease.

Item: Simulation of Clothing with Folds and Wrinkles (The Eurographics Association, 2003)
Bridson, R.; Marino, S.; Fedkiw, R. Edited by D. Breen and M. Lin.
Clothing is a fundamental part of a character's persona, a key storytelling tool used to convey an intended impression to the audience. Draping, folding, wrinkling, stretching, etc. all convey meaning, and thus each is carefully controlled when filming live actors. When making films with computer-simulated cloth, these subtle but important elements must be captured. In this paper, we present several methods essential to matching the behavior and look of clothing worn by digital stand-ins to their real-world counterparts. Novel contributions include a mixed explicit/implicit time integration scheme, a physically correct bending model with (potentially) nonzero rest angles for pre-shaping wrinkles, an interface forecasting technique that promotes the development of detail in contact regions, a post-processing method for treating cloth-character collisions that preserves folds and wrinkles, and a dynamic constraint mechanism that helps to control large-scale folding. The common goal of all these techniques is to produce a cloth simulation with many folds and wrinkles, improving realism.

Item: Constrained Animation of Flocks (The Eurographics Association, 2003)
Anderson, Matt; McDaniel, Eric; Chenney, Stephen. Edited by D. Breen and M. Lin.
Group behaviors are widely used in animation, yet it is difficult to impose hard constraints on such behaviors.
We describe a new technique for generating constrained group animations that improves on existing approaches in two ways: the agents in our simulations meet exact constraints at specific times, and our simulations retain the global properties present in unconstrained motion. Users can place constraints on agents' positions at any time in the animation, or constrain the entire group to meet center-of-mass or shape constraints. Animations are generated in a two-stage process. The first stage finds an initial set of trajectories that exactly meet the constraints but may violate the behavior rules. The second stage samples new animations that maintain the constraints while improving the motion with respect to the underlying behavioral model. We present a range of animations created with our system.

Item: Flexible Automatic Motion Blending with Registration Curves (The Eurographics Association, 2003)
Kovar, Lucas; Gleicher, Michael. Edited by D. Breen and M. Lin.
Many motion editing algorithms, including transitioning and multi-target interpolation, can be represented as instances of a more general operation called motion blending. We introduce a novel data structure called a registration curve that expands the class of motions that can be successfully blended without manual input. Registration curves achieve this by automatically determining relationships involving the timing, local coordinate frame, and constraints of the input motions. We show how registration curves improve upon existing automatic blending methods and demonstrate their use in common blending operations.

Item: Mapping optical motion capture data to skeletal motion using a physical model (The Eurographics Association, 2003)
Zordan, Victor B.; Van Der Horst, Nicholas C. Edited by D. Breen and M. Lin.
Motion capture has become a premier technique for animating humanlike characters.
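Stripped of the timing, coordinate-frame, and constraint machinery that registration curves automate (see "Flexible Automatic Motion Blending with Registration Curves" above), motion blending reduces to time-aligning two motions and interpolating them. A bare-bones scalar sketch with a uniform linear timewarp, purely illustrative:

```python
def sample(motion, t):
    """Linearly interpolate a motion (a list of scalar poses, one per frame)
    at a fractional frame index t."""
    i = min(int(t), len(motion) - 2)
    a = t - i
    return (1 - a) * motion[i] + a * motion[i + 1]

def blend_motions(m1, m2, w, n=11):
    """Blend motions of different lengths: warp both onto a common time
    parameter u in [0, 1], then lerp the time-aligned samples with weight w."""
    out = []
    for j in range(n):
        u = j / (n - 1)
        p1 = sample(m1, u * (len(m1) - 1))
        p2 = sample(m2, u * (len(m2) - 1))
        out.append((1 - w) * p1 + w * p2)
    return out
```

A uniform timewarp like this fails as soon as the motions' key events are not proportionally spaced; discovering the correct nonuniform alignment automatically is precisely what the registration curve contributes.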
To facilitate its use, researchers have focused on manipulating data for retargeting, editing, combining, and reusing motion capture libraries. In many of these efforts, joint angles plus root trajectories are used as input, although this format requires an inherent mapping from the raw data recorded by many popular motion capture set-ups. In this paper, we propose a novel solution to this mapping problem: from the 3D marker positions recorded by optical motion capture systems to joint trajectories for a fixed limb-length skeleton, using a forward dynamic model. To accomplish the mapping, we attach virtual springs between the marker positions and the corresponding landmarks of a physical simulation, and apply resistive torques to the skeleton's joints using a simple controller. For each motion capture sample, a joint-angle posture is resolved from the simulation's equilibrium state, based on the internal torques and external forces. Additional constraints, such as foot plants and hand holds, may also be treated as additional forces applied to the system and are a trivial and natural extension of the proposed technique. We present results for our approach as applied to several motion-captured behaviors.

Item: A Scenario Language to orchestrate Virtual World Evolution (The Eurographics Association, 2003)
Devillers, Frédéric; Donikian, Stéphane. Edited by D. Breen and M. Lin.
Behavioural animation techniques provide autonomous characters with the ability to react credibly in interactive simulations. Directing these autonomous agents is inherently complex. Typically, simulations evolve according to the reactive and cognitive behaviours of autonomous agents, and the free flow of actions makes it difficult to precisely control when desired events happen. In this paper, we propose a scenario language designed to support the direction of semi-autonomous characters. This language offers temporal management and character communication tools.
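The virtual-spring mapping in "Mapping optical motion capture data to skeletal motion" above drives a simulated body toward recorded marker positions and reads the posture off the equilibrium. A simplified point-mass version of that idea (spring constants and the free-particle setup are illustrative, not the paper's articulated model):

```python
import numpy as np

def spring_force(landmark, marker, velocity, k=200.0, c=20.0):
    """Damped zero-rest-length virtual spring pulling a body landmark
    toward its recorded optical marker position."""
    return k * (marker - landmark) - c * velocity

def settle(marker, steps=500, dt=0.01):
    """Integrate a free point-mass landmark to equilibrium at the marker;
    in the full method, joint angles are resolved from this settled state."""
    x = np.zeros(3)
    v = np.zeros(3)
    for _ in range(steps):
        v += dt * spring_force(x, marker, v)  # unit mass
        x += dt * v
    return x
```

In the articulated case the same spring forces act on landmarks attached to rigid limbs, so equilibrium is reached in joint-angle space rather than by free translation; constraints like foot plants simply add more forces to the sum.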
It also allows parallelism between scenarios and a form of competition for the reservation of characters. From a computing standpoint, the language is generic: it makes no assumptions about the nature of the simulation. Lastly, it allows a programmer to build scenarios in a variety of styles, ranging from highly directed, cinema-like scripts to scenarios that only momentarily fine-tune free streams of actions.

Item: Blowing in the Wind (The Eurographics Association, 2003)
Wei, Xiaoming; Zhao, Ye; Fan, Zhe; Li, Wei; Yoakum-Stover, Suzanne; Kaufman, Arie. Edited by D. Breen and M. Lin.
We present an approach for simulating the natural dynamics that emerge from the coupling of a flow field to lightweight, mildly deformable objects immersed within it. We model the flow field using a Lattice Boltzmann Model (LBM) extended with a subgrid model, and accelerate the computation on commodity graphics hardware to achieve real-time simulations. We demonstrate our approach with soap bubbles and a feather blown by wind fields, yet the approach is general enough to apply to other lightweight objects. The soap bubbles illustrate Fresnel reflection, reveal the dynamics of the unseen flow field in which they travel, and display spherical harmonics in their undulations. The free feather floats and flutters in response to lift and drag forces. Our single-bubble simulation allows the user to directly interact with the wind field and thereby influence the dynamics in real time.

Item: Trackable Surfaces (The Eurographics Association, 2003)
Guskov, Igor; Klibanov, Sergey; Bryant, Benjamin. Edited by D. Breen and M. Lin.
We introduce a novel approach for real-time non-rigid surface acquisition based on tracking quad-marked surfaces. The color-identified quad arrangement allows for automatic feature correspondence and tracking initialization, and simplifies 3D reconstruction.
We present a prototype implementation of our approach together with several examples of acquired surface motions.

Item: Construction and Animation of Anatomically Based Human Hand Models (The Eurographics Association, 2003)
Albrecht, Irene; Haber, Jörg; Seidel, Hans-Peter. Edited by D. Breen and M. Lin.
The human hand is a masterpiece of mechanical complexity, able to perform fine motor manipulations and powerful work alike. Designing an animatable human hand model that features the abilities of the archetype created by nature requires a great deal of anatomical detail to be modeled. In this paper, we present a human hand model with underlying anatomical structure. Animation of the hand model is controlled by muscle contraction values. We employ a physically based hybrid muscle model to convert these contraction values into movement of skin and bones. Pseudo-muscles directly control the rotation of bones based on anatomical data and mechanical laws, while geometric muscles deform the skin tissue using a mass-spring system. Thus, the resulting animations automatically exhibit anatomically and physically correct finger movements and skin deformations. In addition, we present a deformation technique to create individual hand models from photographs. A radial basis warping function is set up from the correspondence of feature points and applied to the complete structure of the reference hand model, making the deformed hand model instantly animatable.

Item: Geometry Videos: A New Representation for 3D Animations (The Eurographics Association, 2003)
Briceño, Hector M.; Sander, Pedro V.; McMillan, Leonard; Gortler, Steven; Hoppe, Hugues. Edited by D. Breen and M. Lin.
We present the 'Geometry Video', a new data structure for encoding animated meshes. Being able to encode animated meshes in a generic, source-independent format allows people to share experiences, and changing the viewpoint allows more interaction than the fixed view supported by 2D video.
Geometry videos are based on the 'Geometry Image' mesh representation introduced by Gu et al. [4]. Our novel data structure provides a way to treat an animated mesh as a video sequence (i.e., a 3D image) and is well suited for network streaming. This representation also offers the possibility of applying and adapting existing, mature video processing and compression techniques (such as MPEG encoding) to animated meshes. This paper describes an algorithm to generate geometry videos from animated meshes. The main insight of this paper is that geometry videos resample and reorganize the geometry information in such a way that it becomes very compressible. They provide a unified and intuitive method for level-of-detail control, both in mesh resolution (by scaling the two spatial dimensions) and in frame rate (by scaling the temporal dimension). Geometry videos have a very uniform and regular structure; their resource and computational requirements can be calculated exactly, making them also suitable for applications requiring level-of-service guarantees.

Item: Geometry-Driven Photorealistic Facial Expression Synthesis (The Eurographics Association, 2003)
Zhang, Qingshan; Liu, Zicheng; Guo, Baining; Shum, Harry. Edited by D. Breen and M. Lin.
Expression mapping (also called performance-driven animation) has been a popular method for generating facial animations. One shortcoming of this method is that it does not generate expression details such as the wrinkles due to skin deformation. In this paper, we provide a solution to this problem. We have developed a geometry-driven facial expression synthesis system. Given the feature point positions (geometry) of a facial expression, our system automatically synthesizes a corresponding expression image with photorealistic, natural-looking expression details.
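The compressibility argument for geometry videos can be illustrated with a toy codec (purely illustrative, not the paper's scheme): once geometry is resampled into regular per-frame 2D images, frame-to-frame deltas are small and quantize well, just as in ordinary video coding:

```python
import numpy as np

def encode(frames, scale=1000):
    """Toy 'geometry video' codec: keep the first geometry image intact and
    store each later frame as a quantized delta from its predecessor."""
    base = frames[0]
    deltas = [np.round((frames[i] - frames[i - 1]) * scale).astype(np.int16)
              for i in range(1, len(frames))]
    return base, deltas

def decode(base, deltas, scale=1000):
    """Rebuild the frame sequence by accumulating the dequantized deltas."""
    out = [base]
    for d in deltas:
        out.append(out[-1] + d.astype(np.float64) / scale)
    return out
```

Each "pixel" of a geometry image is an (x, y, z) vertex position, so a delta frame of small int16 values stands in for the motion-compensated residuals a real video coder would entropy-code.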
Since the number of feature points required by the synthesis system is, in general, more than what is available from the performer due to the difficulty of tracking, we have developed a technique to infer the feature point motions from a subset using an example-based approach. Another application of our system is expression editing, where the user drags feature points while the system interactively generates facial expressions with skin deformation details.

Item: Stylizing Motion with Drawings (The Eurographics Association, 2003)
Li, Yin; Gleicher, Michael; Xu, Ying-Qing; Shum, Heung-Yeung. Edited by D. Breen and M. Lin.
In this paper, we provide a method that injects the expressive shape deformations common in traditional 2D animation into an otherwise rigid 3D motion-captured animation. We allow a traditional animator to modify frames in the rendered animation by redrawing key features such as silhouette curves. These changes are then integrated into the animation. To perform this integration, we divide the changes into those that can be made by altering the skeletal animation and those that must be made by altering the character's mesh geometry. To propagate mesh changes into other frames, we introduce a new image warping technique that takes into account the character's 3D structure. The resulting technique provides a system in which an animator can inject stylization into a 3D animation.

Item: An Example-Based Approach for Facial Expression Cloning (The Eurographics Association, 2003)
Pyun, Hyewon; Kim, Yejin; Chae, Wonseok; Kang, Hyung Woo; Shin, Sung Yong. Edited by D. Breen and M. Lin.
In this paper, we present a novel example-based approach for cloning the facial expressions of a source model to a target model while reflecting the characteristic features of the target model in the resulting animation. Our approach comprises three major parts: key-model construction, parameterization, and expression blending. We first present an effective scheme for constructing key-models.
Given a set of source example key-models and their corresponding target key-models created by animators, we parameterize the target key-models using the source key-models and predefine weight functions for the parameterized target key-models based on radial basis functions. At runtime, given an input model with some facial expression, we compute the parameter vector of the corresponding output model, evaluate the weight values for the target key-models, and obtain the output model by blending the target key-models with those weights. The resulting animation preserves the facial expressions of the input model as well as the characteristic features of the target model specified by animators. Our method is not only simple and accurate but also fast enough for various real-time applications such as video games or internet broadcasting.
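The radial-basis weighting used in "An Example-Based Approach for Facial Expression Cloning" above can be sketched as standard Gaussian RBF interpolation: solve for coefficients so that each source key-model's parameter vector maps to weight 1 for its own target key and 0 for the others. A generic RBF sketch under that reading, not the authors' exact formulation, with toy 2D parameter vectors:

```python
import numpy as np

def rbf_fit(centers, values, sigma=1.0):
    """Solve for Gaussian RBF coefficients that interpolate `values`
    exactly at `centers` (the parameterized source key-models)."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    return np.linalg.solve(np.exp(-(d / sigma) ** 2), values)

def rbf_eval(centers, coeffs, x, sigma=1.0):
    """Evaluate the predefined weight functions at a new parameter vector."""
    d = np.linalg.norm(centers - x, axis=-1)
    return np.exp(-(d / sigma) ** 2) @ coeffs

# Three source key-models with 2D parameter vectors (toy values).
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
# One weight function per target key-model: weight 1 at its own key, 0 elsewhere.
coeffs = rbf_fit(centers, np.eye(3))
```

At runtime, an input expression's parameter vector x yields blending weights rbf_eval(centers, coeffs, x), which combine the animator-made target key-models into the output expression.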