SCA 06: Eurographics/SIGGRAPH Symposium on Computer Animation
https://diglib.eg.org:443/handle/10.2312/430
ISBN 3-905673-34-7

Practical Animation of Turbulent Splashing Water
https://diglib.eg.org:443/handle/10.2312/SCA.SCA06.335-344
Practical Animation of Turbulent Splashing Water
Kim, Janghee; Cha, Deukhyun; Chang, Byungjoon; Koo, Bonki; Ihm, Insung
Marie-Paule Cani and James O'Brien
Despite recent advances in fluid animation, producing the small-scale detail of turbulent water remains challenging. In this paper, we extend the well-accepted particle level set method in an attempt to integrate the dynamic behavior of splashing water easily into a fluid animation system. Massless marker particles that escape from the main body of water in spite of the level set correction are transformed into water particles to represent subcell-level features that are hard to capture with a limited grid resolution. These physical particles are then moved through the air by a particle simulation system that, combined with the level set, creates realistic turbulent splashing. In the rendering stage, the particles' physical properties such as mass and velocity are exploited to generate a natural appearance of water droplets and spray. In order to visualize the hybrid water, represented in both the level set and the water particles, we also extend a Monte Carlo ray tracer so that the particle agglomerates are smoothed, thickened if necessary, and rendered efficiently. The effectiveness of the presented technique is demonstrated with several example images and animations.
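The core idea in the abstract — promoting escaped level set marker particles to ballistic water particles and reabsorbing them when they fall back into the liquid — can be illustrated with a minimal sketch. All names here are hypothetical; `phi` stands in for the signed-distance field, and a flat surface at y = 0 replaces a real simulation grid.

```python
import numpy as np

GRAVITY = np.array([0.0, -9.8, 0.0])
ESCAPE_BAND = 0.1  # hypothetical escape threshold (in world units)

def phi(pos):
    """Stand-in signed distance to the liquid surface (negative inside).
    A flat surface at y = 0 is assumed purely for illustration."""
    return pos[1]

def step_particles(markers, dt):
    """Markers that escape the liquid despite level set correction are
    promoted to water particles; they are flagged for reabsorption when
    they fall back below the surface."""
    water = []
    for p in markers:
        if phi(p["pos"]) > ESCAPE_BAND:            # escaped into the air
            p["mass"] = p.get("mass", 1.0)          # promote to water particle
            p["vel"] = p["vel"] + GRAVITY * dt      # ballistic velocity update
            p["pos"] = p["pos"] + p["vel"] * dt
            if phi(p["pos"]) <= 0.0:                # re-entered the liquid
                p["absorbed"] = True
        water.append(p)
    return water
```

In the paper the promoted particles carry mass and velocity into rendering as well; this sketch only shows the simulation-side promotion and ballistic update.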
2006-01-01T00:00:00Z

A Texture Synthesis Method for Liquid Animations
https://diglib.eg.org:443/handle/10.2312/SCA.SCA06.345-351
A Texture Synthesis Method for Liquid Animations
Bargteil, Adam W.; Sin, Funshing; Michaels, Jonathan E.; Goktekin, Tolga G.; O'Brien, James F.
Marie-Paule Cani and James O'Brien
In this paper we present a method for synthesizing textures on animated liquid surfaces generated by a physically based fluid simulation system. Rather than advecting texture coordinates on the surface, our algorithm synthesizes a new texture for every frame using an optimization procedure which attempts to match the surface texture to an input sample texture. By synthesizing a new texture for every frame, our method is able to overcome the discontinuities and distortions of an advected parameterization. We achieve temporal coherence by initializing the surface texture with color values advected from the surface at the previous frame and including these colors in the energy function used during optimization.
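The per-frame strategy described above — re-synthesizing the texture every frame, initialized with colors advected from the previous frame, and balancing an exemplar-matching term against temporal coherence — can be caricatured as follows. This is a heavy simplification, not the paper's neighborhood-based energy: the matching term here is a nearest-color lookup, and `LAMBDA` is a hypothetical coherence weight.

```python
import numpy as np

LAMBDA = 0.5  # hypothetical weight on the temporal-coherence term

def synthesize_frame(exemplar, advected):
    """Per-frame synthesis sketch: each surface sample takes the exemplar
    value closest to its advected color (stand-in for neighborhood
    matching), then blends with the advected color for coherence."""
    out = np.empty_like(advected)
    for i, c in enumerate(advected):
        best = exemplar[np.argmin(np.abs(exemplar - c))]
        # the combined quadratic energy |x - best|^2 + LAMBDA*|x - c|^2
        # is minimized by a weighted average of the two targets
        out[i] = (best + LAMBDA * c) / (1.0 + LAMBDA)
    return out
```

Because a fresh texture is solved for each frame, discontinuities in an advected parameterization never accumulate; the advected colors only seed the optimization.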
2006-01-01T00:00:00Z

Automatic Splicing for Hand and Body Animations
https://diglib.eg.org:443/handle/10.2312/SCA.SCA06.309-316
Automatic Splicing for Hand and Body Animations
Majkowska, Anna; Zordan, Victor B.; Faloutsos, Petros
Marie-Paule Cani and James O'Brien
We propose a solution to a new problem in animation research: how to use human motion capture data to create character motion with detailed hand gesticulation without requiring simultaneous capture of the hands and the full body. Occlusion and a difference in scale make it difficult to capture both the detail of the hand movement and unrestricted full-body motion at the same time. With our method, the two can be captured separately and spliced together seamlessly with little or no user input required. The algorithm relies on a novel distance metric derived from research on gestures and uses a two-pass dynamic time warping algorithm to find correspondence between the hand and full-body motions. In addition, we provide a method for supplying user input, useful to animators who want more control over the integrated animation. We show the power of our technique with a variety of common and highly specialized gesticulation examples.
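The correspondence step above rests on dynamic time warping. The sketch below is plain single-pass DTW over two 1-D feature sequences (e.g. a wrist-speed signal from the body take versus the hand take); the paper's two-pass variant and gesture-derived distance metric are not reproduced here.

```python
import numpy as np

def dtw(a, b):
    """Standard dynamic time warping: fills a cumulative-cost table,
    then backtracks to recover the frame-to-frame correspondence."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack from (n, m) to recover the alignment path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]
```

The recovered path gives, for each body-motion frame, the hand-motion frame to splice in; the splicing itself then only needs local blending around the aligned frames.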
2006-01-01T00:00:00Z

Precomputed Search Trees: Planning for Interactive Goal-Driven Animation
https://diglib.eg.org:443/handle/10.2312/SCA.SCA06.299-308
Precomputed Search Trees: Planning for Interactive Goal-Driven Animation
Lau, Manfred; Kuffner, James J.
Marie-Paule Cani and James O'Brien
We present a novel approach for interactively synthesizing motions for characters navigating in complex environments. We focus on the runtime efficiency for motion generation, thereby enabling the interactive animation of a large number of characters simultaneously. The key idea is to precompute search trees of motion clips that can be applied to arbitrary environments. Given a navigation goal relative to a current body position, the best available solution paths and motion sequences can be efficiently extracted during runtime through a series of table lookups. For distant start and goal positions, we first use a fast coarse-level planner to generate a rough path of intermediate sub-goals to guide each iteration of the runtime lookup phase. We demonstrate the efficiency of our technique across a range of examples in an interactive application with multiple autonomous characters navigating in dynamic environments. Each character responds in real-time to arbitrary user changes to the environment obstacles or navigation goals. The runtime phase is more than two orders of magnitude faster than existing planning methods or traditional motion synthesis techniques. Our technique is not only useful for autonomous motion generation in games, virtual reality, and interactive simulations, but also for animating massive crowds of characters offline for special effects in movies.
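The precompute-then-lookup idea can be sketched as follows, under invented data: three hypothetical motion clips, each reduced to a 2-D body displacement, enumerated offline into a table keyed by where each clip sequence ends. A real system would use a proper spatial index and full-body configurations rather than a linear scan over point displacements.

```python
import itertools
import math

# hypothetical motion clips reduced to (dx, dy) displacements
CLIPS = {"step": (1.0, 0.0), "left": (0.7, 0.7), "right": (0.7, -0.7)}
DEPTH = 3  # hypothetical tree depth

def precompute_tree():
    """Offline phase: enumerate all clip sequences up to DEPTH and
    index each by the total body displacement it produces."""
    table = {}
    for seq in itertools.product(CLIPS, repeat=DEPTH):
        x = sum(CLIPS[c][0] for c in seq)
        y = sum(CLIPS[c][1] for c in seq)
        table.setdefault((round(x, 1), round(y, 1)), seq)
    return table

def lookup(table, goal):
    """Runtime phase: return the precomputed clip sequence whose end
    position lies nearest the navigation goal."""
    return min(table.items(),
               key=lambda kv: math.dist(kv[0], goal))[1]
```

For distant goals, the abstract's coarse-level planner would first break the path into sub-goals, each resolved by one such lookup, which is where the two-orders-of-magnitude runtime advantage over online planning comes from.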
2006-01-01T00:00:00Z