Search Results

  • Item
    Visibility Transition Planning for Dynamic Camera Control
    (ACM SIGGRAPH / Eurographics Association, 2009) Oskam, Thomas; Sumner, Robert W.; Thuerey, Nils; Gross, Markus; Eitan Grinspun and Jessica Hodgins
We present a real-time camera control system that uses a global planning algorithm to compute large, occlusion-free camera paths through complex environments. The algorithm incorporates the visibility of a focus point into the search strategy, so that a path is chosen along which the focus target will be in view. The efficiency of our algorithm comes from a visibility-aware roadmap data structure that permits the precomputation of a coarse representation of all collision-free paths through an environment, together with an estimate of the pairwise visibility between all portions of the scene. Our runtime system executes a path planning algorithm using the precomputed roadmap values to find a coarse path, and then refines the path using a sequence of occlusion maps computed on the fly. An iterative smoothing algorithm, together with a physically based camera model, ensures that the path followed by the camera is smooth in both space and time. Our global planning strategy on the visibility-aware roadmap enables large-scale camera transitions as well as a local third-person camera module that follows a player and avoids obstructed viewpoints. The data structure itself adapts at runtime to dynamic occluders that move in the environment. We demonstrate these capabilities in several realistic game environments.
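As a rough illustration of the planning step described above, the sketch below runs a Dijkstra-style search over a toy roadmap whose edge costs combine path length with an occlusion penalty; the roadmap layout, visibility estimates, and weighting factor are illustrative assumptions, not the paper's data structure.

```python
import heapq

# Hypothetical visibility-aware roadmap: each edge stores its length and an
# estimate in [0, 1] of how visible the focus point is along that edge.
# (Node ids, lengths, and visibility values are made-up illustration data.)
roadmap = {
    "A": [("B", 2.0, 0.2), ("C", 1.0, 0.9)],
    "B": [("D", 2.0, 0.3)],
    "C": [("D", 1.5, 0.95)],
    "D": [],
}

def plan_camera_path(start, goal, occlusion_weight=5.0):
    """Dijkstra search where edge cost = length + weight * (1 - visibility),
    so paths that keep the focus target in view are preferred."""
    best = {start: 0.0}
    heap = [(0.0, start, [start])]
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return cost, path
        for nbr, length, visibility in roadmap[node]:
            new_cost = cost + length + occlusion_weight * (1.0 - visibility)
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(heap, (new_cost, nbr, path + [nbr]))
    return float("inf"), []

print(plan_camera_path("A", "D"))  # prefers the well-visible route A-C-D
```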
  • Item
    Efficient and Robust Annotation of Motion Capture Data
    (ACM SIGGRAPH / Eurographics Association, 2009) Müller, Meinard; Baak, Andreas; Seidel, Hans-Peter; Eitan Grinspun and Jessica Hodgins
In view of the growing collections of available 3D motion capture (mocap) data, the task of automatically annotating large sets of unstructured motion data is gaining in importance. In this paper, we present an efficient approach to labeling mocap data according to a given set of motion categories or classes, each specified by a suitable set of positive example motions. For each class, we derive a motion template that captures the consistent and variable aspects of the class in an explicit matrix representation. We then present a novel annotation procedure in which the unknown motion data is segmented and annotated by locally comparing it with the available motion templates. This procedure is supported by an efficient keyframe-based preprocessing step, which also significantly improves annotation quality by eliminating false positive matches. As a further contribution, we introduce a genetic learning algorithm to automatically learn the necessary keyframes from the given example motions. For evaluation, we report on various experiments conducted on two freely available sets of motion capture data (CMU and HDM05).
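To make the annotation step concrete, here is a minimal sketch that slides a binary motion template over an unknown feature sequence and reports low-mismatch segments; the feature matrices, mismatch measure, and threshold are illustrative assumptions rather than the paper's template formulation.

```python
import numpy as np

def annotate(sequence, template, threshold=0.2):
    """Slide a (features x frames) motion template over a binary feature
    sequence and return the start frames whose average mismatch is small."""
    n_feat, t_len = template.shape
    hits = []
    for start in range(sequence.shape[1] - t_len + 1):
        window = sequence[:, start:start + t_len]
        mismatch = np.mean(window != template)  # fraction of disagreeing entries
        if mismatch <= threshold:
            hits.append((start, mismatch))
    return hits

# Made-up example: 3 relational features over 12 frames, with one embedded
# occurrence of a 4-frame class template starting at frame 5.
template = np.array([[1, 1, 0, 0],
                     [0, 1, 1, 0],
                     [0, 0, 1, 1]])
sequence = np.zeros((3, 12), dtype=int)
sequence[:, 5:9] = template
print(annotate(sequence, template))  # detects a match at frame 5
```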
  • Item
Experiment-based Modeling, Simulation and Validation of Interactions between Virtual Walkers
    (ACM SIGGRAPH / Eurographics Association, 2009) Pettré, Julien; Ondrej, Jan; Olivier, Anne-Hélène; Cretual, Armel; Donikian, Stéphane; Eitan Grinspun and Jessica Hodgins
An interaction occurs between two humans when they walk with converging trajectories: they need to adapt their motion in order to avoid collision and cross one another at a respectful distance. This paper presents a model for solving interactions between virtual humans. The proposed model is derived from experimental interaction data. We first focus our study on the pair-interaction case and, in a second stage, extend our approach to the case of multiple interactions. Our experimental data allow us to state the conditions under which interactions occur between walkers, each walker's role during an interaction, and the strategies walkers use to adapt their motion. The low number of parameters in the proposed model enables its automatic calibration from the available experimental data. We validate our approach by comparing simulated trajectories with real ones, and we also provide comparisons with previous solutions. We finally discuss the ability of our model to be extended to complex situations.
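As a loose illustration of the kind of anticipatory adaptation such a model captures, the sketch below slows one walker down when a constant-velocity extrapolation predicts a crossing closer than a comfort distance; the extrapolation, thresholds, and gain are toy assumptions, not the calibrated model from the paper.

```python
import numpy as np

def predicted_min_distance(p_a, v_a, p_b, v_b, horizon=5.0, steps=50):
    """Minimum distance between two walkers if both keep constant velocity."""
    times = np.linspace(0.0, horizon, steps)
    rel_p = p_b - p_a
    rel_v = v_b - v_a
    dists = np.linalg.norm(rel_p[None, :] + times[:, None] * rel_v[None, :], axis=1)
    return dists.min()

def adapt_speed(p_a, v_a, p_b, v_b, comfort=1.0, gain=0.5):
    """Slow walker A down in proportion to how much the predicted crossing
    distance violates a comfort distance (a toy reactive rule, not the paper's)."""
    d_min = predicted_min_distance(p_a, v_a, p_b, v_b)
    if d_min >= comfort:
        return v_a
    return v_a * max(0.0, 1.0 - gain * (comfort - d_min) / comfort)

p_a, v_a = np.array([0.0, 0.0]), np.array([1.2, 0.0])
p_b, v_b = np.array([5.0, 5.0]), np.array([0.0, -1.2])
print(adapt_speed(p_a, v_a, p_b, v_b))  # A slows down to let B pass first
```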
  • Item
    Anisotropic Friction for Deformable Surfaces and Solids
    (ACM SIGGRAPH / Eurographics Association, 2009) Pabst, Simon; Thomaszewski, Bernhard; Straßer, Wolfgang; Eitan Grinspun and Jessica Hodgins
This paper presents a method for simulating anisotropic friction for deforming surfaces and solids. Frictional contact is a complex phenomenon that fuels research in mechanical engineering, computational contact mechanics, composite material design, and rigid body dynamics, to name just a few fields. Many real-world materials have anisotropic surface properties. As an example, most textiles exhibit direction-dependent frictional behavior, but despite its tremendous impact on visual appearance, only simple isotropic models have been considered for cloth and solid simulation so far. In this work, we propose a simple, application-oriented but physically sound model that extends existing methods to account for anisotropic friction. The sliding properties of surfaces are encoded in friction tensors, which allow us to model frictional resistance freely along arbitrary directions. We also consider heterogeneous and asymmetric surface roughness and demonstrate the increased simulation quality on a number of two- and three-dimensional examples. Our method is computationally efficient and can easily be integrated into existing systems.
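The central ingredient, direction-dependent frictional resistance encoded in a tensor, can be sketched as follows; the Coulomb-style force law and the tensor values are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def anisotropic_friction_force(v_t, normal_force, friction_tensor):
    """Friction force opposing the tangential sliding velocity v_t, with a
    direction-dependent coefficient mu(d) = d^T M d for the unit direction d."""
    speed = np.linalg.norm(v_t)
    if speed < 1e-9:
        return np.zeros_like(v_t)
    d = v_t / speed                      # sliding direction in the tangent plane
    mu = d @ friction_tensor @ d         # effective coefficient along d
    return -mu * normal_force * d        # Coulomb-style magnitude: mu * N

# Made-up 2D tangent-plane tensor: sliding along x (say, along the yarns of a
# fabric) is "slipperier" than sliding along y.
M = np.array([[0.1, 0.0],
              [0.0, 0.6]])
N = 2.0  # normal force magnitude
print(anisotropic_friction_force(np.array([1.0, 0.0]), N, M))  # small force
print(anisotropic_friction_force(np.array([0.0, 1.0]), N, M))  # larger force
```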
  • Item
    Guiding of Smoke Animations Through Variational Coupling of Simulations at Different Resolutions
    (ACM SIGGRAPH / Eurographics Association, 2009) Nielsen, Michael B.; Christensen, Brian B.; Zafar, Nafees Bin; Roble, Doug; Museth, Ken; Eitan Grinspun and Jessica Hodgins
We propose a novel approach to guiding Eulerian smoke animations through the coupling of simulations at different grid resolutions. Specifically, we present a variational formulation that allows smoke animations to adopt the low-frequency features from a lower resolution simulation (or a non-physical synthesis), while simultaneously developing higher frequencies. The overall motivation for this work is to address the fact that art direction of smoke animations is notoriously tedious. In particular, a change in grid resolution can result in dramatic changes in the behavior of smoke animations, and existing methods for guiding either significantly lack high-frequency detail or may result in undesired features developing over time. Provided that the bulk movement can be represented satisfactorily at low resolution, our technique effectively allows artists to prototype simulations at low resolution (where computations are fast) and subsequently add extra details without altering the overall look and feel. Our implementation is based on a customized multigrid solver with memory-efficient data structures.
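The coupling idea can be phrased as a small least-squares problem: keep the high-resolution field close to its own state while its downsampled version tracks the low-resolution guide. The 1D sketch below solves such a blend directly; the downsampling operator, weights, and fields are illustrative assumptions, and the paper's customized multigrid solver is not reproduced here.

```python
import numpy as np

def guided_field(u_high, u_low, weight=4.0):
    """Minimize |u - u_high|^2 + weight * |D u - u_low|^2 for a 1D field,
    where D averages pairs of fine cells down to the coarse grid."""
    n_fine = u_high.size
    n_coarse = u_low.size
    assert n_fine == 2 * n_coarse
    # Downsampling matrix D: each coarse cell is the mean of two fine cells.
    D = np.zeros((n_coarse, n_fine))
    for i in range(n_coarse):
        D[i, 2 * i] = D[i, 2 * i + 1] = 0.5
    # Normal equations: (I + w D^T D) u = u_high + w D^T u_low
    A = np.eye(n_fine) + weight * D.T @ D
    b = u_high + weight * D.T @ u_low
    return np.linalg.solve(A, b)

# Made-up example: high-res field with fine wiggles, low-res guide that drifts.
u_high = np.array([0.0, 0.2, 0.0, 0.2, 0.0, 0.2, 0.0, 0.2])
u_low = np.array([1.0, 1.0, 1.0, 1.0])
print(guided_field(u_high, u_low))  # keeps the wiggle, adopts the coarse drift
```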
  • Item
    A Point-based Method for Animating Incompressible Flow
    (ACM SIGGRAPH / Eurographics Association, 2009) Sin, Funshing; Bargteil, Adam W.; Hodgins, Jessica K.; Eitan Grinspun and Jessica Hodgins
In this paper, we present a point-based method for animating incompressible flow. The advection term is handled by moving the sample points through the flow in a Lagrangian fashion. However, unlike most previous approaches, the pressure term is handled by performing a projection onto a divergence-free field. To perform the pressure projection, we compute a Voronoi diagram with the sample points as input. Borrowing from finite volume methods, we then invoke the divergence theorem and ensure that each Voronoi cell is divergence free. To handle complex boundary conditions, Voronoi cells are clipped against obstacle boundaries and free surfaces. The method is stable and flexible, and it combines many of the desirable features of point-based and grid-based methods. We demonstrate our approach on several examples of splashing and streaming liquid and swirling smoke.
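The pressure step boils down to a finite-volume Poisson solve followed by a flux correction. The sketch below applies that projection to a hand-built chain of cells; the cell connectivity, face areas, and velocities are made-up illustration data rather than an actual clipped Voronoi diagram.

```python
import numpy as np

# A tiny finite-volume sketch in the spirit of the divergence-theorem argument:
# cells connected by faces, each face storing its area, the distance between
# the two cell sites, and a sampled normal velocity (positive from cell i
# towards cell j).
n_cells = 4
faces = [  # (cell_i, cell_j, face_area, site_distance, normal_velocity)
    (0, 1, 1.0, 1.0, 0.8),
    (1, 2, 1.0, 1.0, 0.2),
    (2, 3, 1.0, 1.0, 0.5),
]

def pressure_projection(faces, n_cells, dt=0.1, density=1.0):
    """Make every cell divergence free: solve a Poisson equation for the cell
    pressures, then subtract the pressure gradient from each face velocity."""
    div = np.zeros(n_cells)
    A = np.zeros((n_cells, n_cells))
    for i, j, area, dist, vel in faces:
        div[i] += area * vel              # flux leaving cell i through the face
        div[j] -= area * vel              # the same flux entering cell j
        coeff = dt * area / (density * dist)
        A[i, i] += coeff; A[j, j] += coeff
        A[i, j] -= coeff; A[j, i] -= coeff
    A += 1e-9 * np.eye(n_cells)           # pin down the constant pressure mode
    p = np.linalg.solve(A, -div)
    return [vel - dt * (p[j] - p[i]) / (density * dist)
            for i, j, area, dist, vel in faces]

# These four sealed cells cannot sustain any net flow, so all face velocities
# are driven to (numerically) zero by the projection.
print(pressure_projection(faces, n_cells))
```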
  • Item
    A Point-Based Method for Animating Elastoplastic Solids
    (ACM SIGGRAPH / Eurographics Association, 2009) Gerszewski, Dan; Bhattacharya, Haimasree; Bargteil, Adam W.; Eitan Grinspun and Jessica Hodgins
In this paper we describe a point-based approach for animating elastoplastic materials. Our primary contribution is a simple method for computing the deformation gradient for each particle in the simulation. The deformation gradient for a particle is computed by finding the affine transformation that best approximates the motion of its neighboring particles over a single timestep. These per-step transformations are then composed to obtain the total deformation gradient that describes the deformation around the particle over the course of the simulation. Given the deformation gradient, we can apply arbitrary constitutive models and compute the resulting elastic forces. Our method has two primary advantages: we do not store or compare against an initial rest configuration, and we work directly with the deformation gradient. The first advantage avoids poor numerical conditioning, and the second naturally leads to a multiplicative model of deformation appropriate for finite deformations. We demonstrate our approach on a number of examples that exhibit a wide range of material behaviors.
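The per-particle update can be sketched directly from the description above: fit the affine transformation that best maps the previous neighbor offsets to the current ones, then compose it onto the stored deformation gradient; the least-squares weighting and example data are illustrative assumptions.

```python
import numpy as np

def update_deformation_gradient(F, prev_offsets, curr_offsets, weights=None):
    """Weighted least-squares fit of the affine transform A with
    curr ~= A @ prev for each neighbor offset, then compose: F <- A @ F."""
    if weights is None:
        weights = np.ones(len(prev_offsets))
    P = np.asarray(prev_offsets)   # (n_neighbors, dim) offsets at step t
    C = np.asarray(curr_offsets)   # (n_neighbors, dim) offsets at step t+1
    W = np.diag(weights)
    # Normal equations: A = (C^T W P) (P^T W P)^{-1}
    A = (C.T @ W @ P) @ np.linalg.inv(P.T @ W @ P)
    return A @ F

# Made-up 2D example: neighbors stretched by 10% along x over one timestep.
F = np.eye(2)
prev = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, -1.0])]
curr = [p * np.array([1.1, 1.0]) for p in prev]
print(update_deformation_gradient(F, prev, curr))  # ~[[1.1, 0], [0, 1]]
```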
  • Item
    Style Learning and Transferring for Facial Animation Editing
    (ACM SIGGRAPH / Eurographics Association, 2009) Ma, Xiaohan; Le, Binh Huy; Deng, Zhigang; Eitan Grinspun and Jessica Hodgins
Most current facial animation editing techniques are frame-based approaches (i.e., manually editing one keyframe every several frames), which are inefficient, time-consuming, and prone to editing inconsistency. In this paper, we present a novel facial editing style learning framework that learns a constraint-based Gaussian Process model from a small number of facial-editing pairs; the learned model can then be applied to automate the editing of the remaining facial animation frames or to transfer editing styles between different animation sequences. Compared with the state-of-the-art multiresolution-based mesh sequence editing technique, our approach is more flexible, powerful, and adaptive. Our approach can dramatically reduce the manual effort required by most current facial animation editing approaches.
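At its core this is regression from unedited to edited frames learned from a few example pairs. The sketch below uses plain Gaussian Process regression with an RBF kernel as a stand-in; it is not the paper's constraint-based formulation, and the "facial parameter" data is made up.

```python
import numpy as np

def rbf_kernel(X, Y, length_scale=1.0):
    """Squared-exponential kernel between two sets of feature vectors."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_fit_predict(X_train, Y_train, X_test, noise=1e-4):
    """GP regression mean: predict edited frames for new unedited frames."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_test, X_train)
    return K_star @ np.linalg.solve(K, Y_train)

# Made-up "facial parameters": a handful of (original frame -> edited frame)
# pairs, then the learned mapping is applied to the remaining frames.
X_train = np.array([[0.0], [0.5], [1.0]])          # original frame features
Y_train = np.array([[0.1], [0.8], [1.3]])          # artist-edited counterparts
X_test = np.linspace(0.0, 1.0, 5).reshape(-1, 1)   # frames still to edit
print(gp_fit_predict(X_train, Y_train, X_test))
```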
  • Item
    Real-Time Deformation and Fracture in a Game Environment
    (ACM SIGGRAPH / Eurographics Association, 2009) Parker, Eric G.; O'Brien, James F.; Eitan Grinspun and Jessica Hodgins
This paper describes a simulation system that has been developed to model the deformation and fracture of solid objects in a real-time gaming context. Based on a corotational tetrahedral finite element method, this system has been constructed from components published in the graphics and computational physics literature. The goal of this paper is to describe how these components can be combined to produce an engine that is robust to unpredictable user interactions, fast enough to model reasonable scenarios at real-time speeds, suitable for use in the design of a game level, and equipped with appropriate controls that allow content creators to match artistic direction. Details concerning parallel implementation, solver design, rendering method, and other aspects of the simulation are elucidated with the intent of providing a guide to others wishing to implement similar systems. Examples from in-game scenes captured on the Xbox 360, PS3, and PC platforms are included.
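One building block such a system relies on is the corotational element force, which factors the rotation out of each tetrahedron's deformation before applying linear elasticity. The sketch below extracts that rotation and the corotated strain for a single element; the example geometry is an illustrative assumption, and stiffness assembly and parallelization are not shown.

```python
import numpy as np

def corotational_strain(rest_verts, deformed_verts):
    """For one tetrahedron, extract the rotation R of its deformation gradient
    by polar decomposition and return the small-strain tensor measured in the
    unrotated frame (the core ingredient of a corotational FEM force)."""
    Dm = (rest_verts[1:] - rest_verts[0]).T          # rest edge matrix (3x3)
    Ds = (deformed_verts[1:] - deformed_verts[0]).T  # deformed edge matrix
    F = Ds @ np.linalg.inv(Dm)                       # deformation gradient
    U, _, Vt = np.linalg.svd(F)
    R = U @ Vt                                       # closest rotation to F
    if np.linalg.det(R) < 0:                         # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    S = R.T @ F                                      # symmetric stretch part
    return R, S - np.eye(3)                          # rotation, corotated strain

# Made-up tet: a unit tetrahedron rotated 90 degrees about z and stretched 5%
# along its (pre-rotation) x axis; the reported strain ignores the rotation.
rest = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
stretch = np.diag([1.05, 1.0, 1.0])
deformed = rest @ (Rz @ stretch).T
print(corotational_strain(rest, deformed)[1])        # ~diag(0.05, 0, 0)
```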
  • Item
    Exact volume preserving skinning with shape control
    (ACM SIGGRAPH / Eurographics Association, 2009) Rohmer, Damien; Hahmann, Stefanie; Cani, Marie-Paule; Eitan Grinspun and Jessica Hodgins
In the real world, most objects do not lose volume when they deform: they may, for instance, compensate for a local compression by inflating in the orthogonal direction or, in the case of a character, preserve volume through specific bulges and folds. This paper presents a novel extension to smooth skinning that not only offers exact control of the object's volume but also enables the user to specify the shape of volume-preserving deformations through intuitive 1D profile curves. The method, a geometric post-process applied to standard smooth skinning, fits neatly into the usual production pipeline. It can be used regardless of the desired locality of the volume correction and places no constraints on the original mesh. Several behaviors can be modeled, mimicking the different ways rubber-like materials and organic shapes deform. An improved algorithm for robustly computing skinning weights is also provided, making the method directly usable on complex characters, even by non-experts.
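The volume bookkeeping behind such a post-process can be sketched on a closed triangle mesh: measure the volume lost by skinning and push vertices along their normals just enough to compensate, with per-vertex weights standing in for the role of the paper's profile curves. The uniform weights and single first-order correction step below are simplifying assumptions, not the exact scheme.

```python
import numpy as np

def mesh_volume(verts, tris):
    """Signed volume of a closed triangle mesh via the divergence theorem."""
    a, b, c = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    return np.sum(np.einsum("ij,ij->i", a, np.cross(b, c))) / 6.0

def vertex_normals(verts, tris):
    """Vertex normals from accumulated (area-weighted) face normals."""
    n = np.zeros_like(verts)
    fn = np.cross(verts[tris[:, 1]] - verts[tris[:, 0]],
                  verts[tris[:, 2]] - verts[tris[:, 0]])
    for k in range(3):
        np.add.at(n, tris[:, k], fn)
    return n / np.maximum(np.linalg.norm(n, axis=1, keepdims=True), 1e-12)

def restore_volume(verts, tris, target_volume, weights=None):
    """Offset vertices along their normals, scaled so the (first-order)
    volume change matches the volume lost by skinning."""
    if weights is None:
        weights = np.ones(len(verts))    # stand-in for profile-curve weights
    step = weights[:, None] * vertex_normals(verts, tris)
    eps = 1e-4                           # finite-difference volume sensitivity
    dV = (mesh_volume(verts + eps * step, tris) - mesh_volume(verts, tris)) / eps
    s = (target_volume - mesh_volume(verts, tris)) / dV
    return verts + s * step

# Made-up example: a unit tetrahedron squashed to 80% of its height by
# "skinning"; the correction approximately restores the original volume.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
tris = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
target = mesh_volume(verts, tris)
squashed = verts * np.array([1.0, 1.0, 0.8])
corrected = restore_volume(squashed, tris, target)
print(mesh_volume(squashed, tris), mesh_volume(corrected, tris), target)
```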