Search Results
Now showing 1 - 10 of 20
Item: Visibility Transition Planning for Dynamic Camera Control (ACM SIGGRAPH / Eurographics Association, 2009)
Authors: Oskam, Thomas; Sumner, Robert W.; Thuerey, Nils; Gross, Markus
Editors: Eitan Grinspun and Jessica Hodgins
We present a real-time camera control system that uses a global planning algorithm to compute large, occlusion-free camera paths through complex environments. The algorithm incorporates the visibility of a focus point into the search strategy, so that a path is chosen along which the focus target will be in view. The efficiency of our algorithm comes from a visibility-aware roadmap data structure that permits the precomputation of a coarse representation of all collision-free paths through an environment, together with an estimate of the pair-wise visibility between all portions of the scene. Our runtime system executes a path planning algorithm using the precomputed roadmap values to find a coarse path, and then refines the path using a sequence of occlusion maps computed on-the-fly. An iterative smoothing algorithm, together with a physically-based camera model, ensures that the path followed by the camera is smooth in both space and time. Our global planning strategy on the visibility-aware roadmap enables large-scale camera transitions as well as a local third-person camera module that follows a player and avoids obstructed viewpoints. The data structure itself adapts at run-time to dynamic occluders that move in an environment. We demonstrate these capabilities in several realistic game environments.

Item: Efficient and Robust Annotation of Motion Capture Data (ACM SIGGRAPH / Eurographics Association, 2009)
Authors: Müller, Meinard; Baak, Andreas; Seidel, Hans-Peter
Editors: Eitan Grinspun and Jessica Hodgins
In view of increasing collections of available 3D motion capture (mocap) data, the task of automatically annotating large sets of unstructured motion data is gaining in importance. In this paper, we present an efficient approach to label mocap data according to a given set of motion categories or classes, each specified by a suitable set of positive example motions. For each class, we derive a motion template that captures the consistent and variable aspects of a motion class in an explicit matrix representation. We then present a novel annotation procedure, where the unknown motion data is segmented and annotated by locally comparing it with the available motion templates. This procedure is supported by an efficient keyframe-based preprocessing step, which also significantly improves the annotation quality by eliminating false positive matches. As a further contribution, we introduce a genetic learning algorithm to automatically learn the necessary keyframes from the given example motions. For evaluation, we report on various experiments conducted on two freely available sets of motion capture data (CMU and HDM05).
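As a reading aid for the annotation abstract above, the following is a minimal sketch of template-based labeling in the spirit of that description: a class template is a feature-by-frame matrix whose "variable" entries are ignored during matching, and a window of the unknown motion is accepted when its masked distance to the template is small. The function name, the fixed-length (no time-warping) matching, and the 0.5 wildcard convention are illustrative assumptions, not the authors' actual procedure, which additionally relies on keyframe-based pre-filtering.

```python
import numpy as np

def annotate_with_template(features, template, threshold=0.1):
    """Hypothetical sketch: slide a fixed-length motion template over a
    binary relational-feature matrix and return the start frames whose
    masked distance falls below `threshold`.

    features: (num_features, num_frames) array with entries in {0, 1}
    template: (num_features, template_len) array with entries in [0, 1],
              where 0.5 marks a 'variable' entry that is ignored
    """
    f, n = features.shape
    f2, m = template.shape
    assert f == f2, "feature sets of motion and template must match"
    mask = np.abs(template - 0.5) > 1e-6      # keep only consistent entries
    hits = []
    for start in range(n - m + 1):
        window = features[:, start:start + m]
        diff = np.abs(window - template)[mask]
        dist = diff.mean() if diff.size else 1.0
        if dist < threshold:                   # window matches the class
            hits.append(start)
    return hits
```

A real system would also allow local time warping when comparing a window against the template; the uniform-length comparison above is kept only to make the matrix-matching idea concrete.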
Item: Use and Re-use of Facial Motion Capture Data (The Eurographics Association, 2003)
Authors: Lorenzo, M.S.; Edge, J.D.; King, S.A.; Maddock, S.
Editors: Peter Hall and Philip Willis
Motion capture (mocap) data is commonly used to recreate complex human motions in computer graphics. Markers are placed on an actor, and the captured movement of these markers allows us to animate computer-generated characters. Technologies have been introduced which allow this technique to be used not only to retrieve rigid body transformations, but also soft body motion such as the facial movement of an actor. The inherent difficulties of working with facial mocap lie in the application of a discrete sampling of surface points to animate a fine discontinuous mesh. Furthermore, in the general case, where the morphology of the actor's face does not coincide with that of the model we wish to animate, some form of retargeting must be applied. In this paper we discuss methods to animate face meshes from mocap data with minimal user intervention using a surface-oriented deformation paradigm.

Item: Experiment-based Modeling, Simulation and Validation of Interactions between Virtual Walkers (ACM SIGGRAPH / Eurographics Association, 2009)
Authors: Pettré, Julien; Ondrej, Jan; Olivier, Anne-Hélène; Cretual, Armel; Donikian, Stéphane
Editors: Eitan Grinspun and Jessica Hodgins
An interaction occurs between two humans when they walk with converging trajectories. They need to adapt their motion in order to avoid and cross one another at a respectful distance. This paper presents a model for solving interactions between virtual humans. The proposed model is derived from experimental interaction data. We first focus our study on the pair-interaction case. In a second stage, we extend our approach to the multiple-interactions case. Our experimental data allow us to state the conditions for interactions to occur between walkers, as well as each walker's role during the interaction and the strategies walkers use to adapt their motion. The low number of parameters of the proposed model enables its automatic calibration from available experimental data. We validate our approach by comparing simulated trajectories with real ones. We also provide a comparison with previous solutions. We finally discuss the ability of our model to be extended to complex situations.

Item: Hardware-based Simulation and Collision Detection for Large Particle Systems (The Eurographics Association, 2004)
Authors: Kolb, A.; Latta, L.; Rezk-Salama, C.
Editors: Tomas Akenine-Moeller and Michael McCool
Particle systems have long been recognized as an essential building block for detail-rich and lively visual environments. Current implementations can handle up to 10,000 particles in real-time simulations and are mostly limited by the transfer of particle data from the main processor to the graphics hardware (GPU) for rendering. This paper introduces a full GPU implementation, using fragment shaders, of both the simulation and rendering of a dynamically-growing particle system. Such an implementation can render up to 1 million particles in real-time on recent hardware. The massively parallel simulation handles collision detection and reaction of particles with objects of arbitrary shape. The collision detection is based on depth maps that represent the outer shape of an object. The depth maps store distance values and normal vectors for collision reaction. Using a special texture-based indexing technique to represent normal vectors, standard 8-bit textures can be used to describe the complete depth map data. Alternatively, several depth maps can be stored in one floating-point texture. In addition, a GPU-based parallel sorting algorithm is introduced that can be used to perform a depth sorting of the particles for correct alpha blending.
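The particle-system abstract above describes collision handling against depth maps that store distances and normals. Below is a small CPU-side sketch of that idea, assuming a single height-field-style map over the xz-plane with one normal per texel; the paper evaluates this on the GPU in fragment shaders and encodes normals in 8-bit textures, so the data layout and function here are simplifying assumptions rather than its implementation.

```python
import numpy as np

def depth_map_collision(pos, vel, height, normals, origin, spacing,
                        restitution=0.5):
    """Sketch of depth-map collision response for particles (CPU version).

    pos, vel : (N, 3) particle positions and velocities
    height   : (H, W) surface height sampled over the xz-plane
    normals  : (H, W, 3) unit surface normals per texel
    origin   : (x0, z0) world coordinates of texel (0, 0)
    spacing  : texel size in world units
    """
    # nearest-neighbour lookup of the texel under each particle
    ij = np.floor((pos[:, [0, 2]] - np.asarray(origin)) / spacing).astype(int)
    row = np.clip(ij[:, 1], 0, height.shape[0] - 1)
    col = np.clip(ij[:, 0], 0, height.shape[1] - 1)
    surf_y = height[row, col]
    n = normals[row, col]
    hit = pos[:, 1] < surf_y                          # penetration test
    pos[hit, 1] = surf_y[hit]                         # push back onto surface
    v = vel[hit]
    vn = np.sum(v * n[hit], axis=1, keepdims=True)    # normal component of v
    vel[hit] = v - (1.0 + restitution) * vn * n[hit]  # reflect and damp
    return pos, vel
```

A shader version would perform the same per-particle lookup and reflection, reading the distance and normal from textures instead of NumPy arrays.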
Item: Anisotropic Friction for Deformable Surfaces and Solids (ACM SIGGRAPH / Eurographics Association, 2009)
Authors: Pabst, Simon; Thomaszewski, Bernhard; Straßer, Wolfgang
Editors: Eitan Grinspun and Jessica Hodgins
This paper presents a method for simulating anisotropic friction for deforming surfaces and solids. Frictional contact is a complex phenomenon that fuels research in mechanical engineering, computational contact mechanics, composite material design and rigid body dynamics, to name just a few. Many real-world materials have anisotropic surface properties. As an example, most textiles exhibit direction-dependent frictional behavior, but despite its tremendous impact on visual appearance, only simple isotropic models have been considered for cloth and solid simulation so far. In this work, we propose a simple, application-oriented but physically sound model that extends existing methods to account for anisotropic friction. The sliding properties of surfaces are encoded in friction tensors, which allows us to model frictional resistance freely along arbitrary directions. We also consider heterogeneous and asymmetric surface roughness and demonstrate the increased simulation quality on a number of two- and three-dimensional examples. Our method is computationally efficient and can easily be integrated into existing systems.

Item: Realtime Ray Tracing of Dynamic Scenes on an FPGA Chip (The Eurographics Association, 2004)
Authors: Schmittler, Jörg; Woop, Sven; Wagner, Daniel; Paul, Wolfgang J.; Slusallek, Philipp
Editors: Tomas Akenine-Moeller and Michael McCool
Realtime ray tracing has recently established itself as a possible alternative to the current rasterization approach for interactive 3D graphics. However, the performance of existing software implementations is still severely limited by today's CPUs, requiring many CPUs to achieve realtime performance. In this paper we present a prototype implementation of the full ray tracing pipeline on a single FPGA chip. Running at only 90 MHz, it achieves realtime frame rates of 20 to 60 frames per second over a wide range of 3D scenes and includes support for texturing, multiple light sources, and multiple levels of reflection or transparency. A particularly interesting feature of the design is that the transformation unit required for supporting dynamic scenes is re-used for other tasks as well, including efficient ray-triangle intersection and shading computations. Despite the additional support for dynamic scenes, this approach reduces the overall hardware cost by 68%. We evaluate the design and its implementation across a wide set of example scenes and demonstrate the benefits of dedicated realtime ray tracing hardware.
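The FPGA paper above mentions reusing its transformation unit for ray-triangle intersection. For reference, here is a software sketch of one standard barycentric ray-triangle test (Möller–Trumbore style); the hardware design maps equivalent arithmetic onto its fixed-function unit, so this is only an illustration of the operation being accelerated, not a description of the actual pipeline.

```python
import numpy as np

def ray_triangle_intersect(orig, dirn, v0, v1, v2, eps=1e-9):
    """Return (t, u, v) for the hit along the ray orig + t*dirn, where
    (u, v) are barycentric coordinates, or None if the ray misses."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(dirn, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(dirn, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return (t, u, v) if t > eps else None

# unit triangle in the xy-plane, hit from above
tri = [np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])]
print(ray_triangle_intersect(np.array([0.2, 0.2, 1.0]),
                             np.array([0.0, 0.0, -1.0]), *tri))
# -> (1.0, 0.2, 0.2)
```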
Item: Guiding of Smoke Animations Through Variational Coupling of Simulations at Different Resolutions (ACM SIGGRAPH / Eurographics Association, 2009)
Authors: Nielsen, Michael B.; Christensen, Brian B.; Zafar, Nafees Bin; Roble, Doug; Museth, Ken
Editors: Eitan Grinspun and Jessica Hodgins
We propose a novel approach to guiding of Eulerian-based smoke animations through coupling of simulations at different grid resolutions. Specifically, we present a variational formulation that allows smoke animations to adopt the low-frequency features from a lower-resolution simulation (or non-physical synthesis), while simultaneously developing higher frequencies. The overall motivation for this work is to address the fact that art-direction of smoke animations is notoriously tedious. In particular, a change in grid resolution can result in dramatic changes in the behavior of smoke animations, and existing methods for guiding either significantly lack high-frequency detail or may result in undesired features developing over time. Provided that the bulk movement can be represented satisfactorily at low resolution, our technique effectively allows artists to prototype simulations at low resolution (where computations are fast) and subsequently add extra details without altering the overall look and feel. Our implementation is based on a customized multi-grid solver with memory-efficient data structures.

Item: Cartoon-Style Rendering of Motion from Video (The Eurographics Association, 2003)
Authors: Collomosse, J.P.; Hall, P.M.
Editors: Peter Hall and Philip Willis
The contribution of this paper is a novel non-photorealistic rendering (NPR) system capable of rendering motion within a video sequence in artistic styles. A variety of cartoon-style motion cues may be inserted into a video sequence, including augmentation cues (such as streak lines, ghosting, or blurring) and deformation cues (such as squash and stretch or drag effects). Users may select from the gamut of available styles by setting parameters which influence the placement and appearance of motion cues. Our system draws upon techniques from both the vision and the graphics communities to analyse and render motion and is entirely automatic, aside from minimal user interaction to bootstrap a feature tracker. We demonstrate successful application of our system to a variety of subjects with complexities ranging from simple oscillatory to articulated motion, under both static and moving camera conditions with occlusion present. We conclude with a critical appraisal of the system and discuss directions for future work.

Item: A Point-based Method for Animating Incompressible Flow (ACM SIGGRAPH / Eurographics Association, 2009)
Authors: Sin, Funshing; Bargteil, Adam W.; Hodgins, Jessica K.
Editors: Eitan Grinspun and Jessica Hodgins
In this paper, we present a point-based method for animating incompressible flow. The advection term is handled by moving the sample points through the flow in a Lagrangian fashion. However, unlike most previous approaches, the pressure term is handled by performing a projection onto a divergence-free field. To perform the pressure projection, we compute a Voronoi diagram with the sample points as input. Borrowing from Finite Volume Methods, we then invoke the divergence theorem and ensure that each Voronoi cell is divergence free. To handle complex boundary conditions, Voronoi cells are clipped against obstacle boundaries and free surfaces. The method is stable, flexible and combines many of the desirable features of point-based and grid-based methods. We demonstrate our approach on several examples of splashing and streaming liquid and swirling smoke.
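The last abstract invokes the divergence theorem to make each Voronoi cell divergence free. The toy sketch below shows only that discrete ingredient: the net boundary flux of a cell divided by its area approximates div(u), and the pressure projection described in the abstract drives this quantity to zero for every cell. The 2D square "cell", the function name, and the array layout are assumptions for illustration, not the paper's data structures.

```python
import numpy as np

def cell_divergence(face_velocities, face_normals, face_lengths, cell_area):
    """Finite-volume estimate of div(u) over one 2D Voronoi cell via the
    divergence theorem: sum the outward flux u . n over the cell boundary
    and divide by the cell area. Incompressibility requires this to be
    (approximately) zero for every cell."""
    flux = np.sum(np.sum(face_velocities * face_normals, axis=1) * face_lengths)
    return flux / cell_area

# toy usage on a unit-square "cell": a constant velocity field has equal
# inflow and outflow, so the discrete divergence is zero.
vels = np.array([[1.0, 0.0]] * 4)                             # u at each face
norms = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], float)   # outward normals
lens = np.ones(4)                                             # face lengths
print(cell_divergence(vels, norms, lens, cell_area=1.0))      # -> 0.0
```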