Search Results

Now showing 1 - 10 of 65
  • Item
    A Survey on Reinforcement Learning Methods in Character Animation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Kwiatkowski, Ariel; Alvarado, Eduardo; Kalogeiton, Vicky; Liu, C. Karen; Pettré, Julien; Panne, Michiel van de; Cani, Marie-Paule; Meneveaux, Daniel; Patanè, Giuseppe
    Reinforcement Learning is an area of Machine Learning focused on how agents can be trained to make sequential decisions and achieve a particular goal within an arbitrary environment. While learning, they repeatedly take actions based on their observation of the environment and receive rewards which define the objective. This experience is then used to progressively improve the policy controlling the agent's behavior, typically represented by a neural network. This trained module can then be reused for similar problems, which makes this approach promising for the animation of autonomous, yet reactive characters in simulators, video games or virtual reality environments. This paper surveys modern Deep Reinforcement Learning (DRL) methods and discusses their possible applications in Character Animation, from skeletal control of a single, physically-based character to navigation controllers for individual agents and virtual crowds. It also describes the practical side of training DRL systems, comparing the different frameworks available to build such agents.
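    The agent-environment loop summarized in this abstract can be written down in a few lines. The environment and policy objects below are hypothetical stand-ins used only for illustration, not any specific framework's API.

      # Minimal sketch of the reinforcement-learning loop: observe, act,
      # receive a reward, then use the collected experience to improve the
      # policy.  `env` and `policy` are assumed, illustrative objects.
      def run_episode(env, policy, learn=True):
          observation = env.reset()
          trajectory = []                                   # (observation, action, reward) triples
          done = False
          while not done:
              action = policy.act(observation)              # decision from the current policy
              observation, reward, done = env.step(action)  # environment feedback
              trajectory.append((observation, action, reward))
          if learn:
              policy.update(trajectory)                     # improve behaviour from experience
          return sum(reward for _, _, reward in trajectory)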
  • Item
    Sketch-Based Modeling of Vascular Systems: a First Step Towards Interactive Teaching of Anatomy
    (The Eurographics Association, 2010) Pihuit, Adeline; Cani, Marie-Paule; Palombi, Olivier; Marc Alexa and Ellen Yi-Luen Do (Eds.)
    We present a sketch-based modeling system, inspired by anatomical drawing, which constructs plausible 3D models of branching vessels from a single sketch. The input drawing typically includes non-flat silhouettes and occluded parts. We exploit the sketching conventions used in anatomical drawings to infer depth and curvature from contour and skeleton curves extracted from the sketch. We then model the set of branching vessels as a convolution surface generated by a graph of skeleton curves: while these curves are set to fit the sketch in the front plane, non-uniform B-spline interpolation is used to give them smoothly varying depth values that meet the set of constraints. The final model is displayed using an expressive rendering method that imitates the aspect of chalk drawing. We discuss the future use of this system as a step towards the interactive teaching of anatomy.
  • Item
    Velocity Skinning for Real-time Stylized Skeletal Animation
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Rohmer, Damien; Tarini, Marco; Kalyanasundaram, Niranjan; Moshfeghifar, Faezeh; Cani, Marie-Paule; Zordan, Victor; Mitra, Niloy and Viola, Ivan (Eds.)
    Secondary animation effects are essential for liveliness. We propose a simple, real-time solution for adding them on top of standard skinning, enabling artist-driven stylization of skeletal motion. Our method takes a standard skeleton animation as input, along with a skin mesh and rig weights. It then derives per-vertex deformations from the different linear and angular velocities along the skeletal hierarchy. We highlight two specific applications of this general framework, namely the cartoon-like "squashy" and "floppy" effects, achieved from specific combinations of velocity terms. As our results show, combining these effects makes it possible to mimic, enhance and stylize physical-looking behaviours within a standard animation pipeline, for arbitrary skinned characters. Interactive on CPU, our method also allows for a GPU implementation, yielding real-time performance even on large meshes. Animator control is supported through a simple interface toolkit, enabling animators to refine the desired type and magnitude of deformation at relevant vertices by simply painting weights. The resulting rigged character automatically responds to new skeletal animation, without further input.
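    As a rough illustration of driving secondary deformation from bone velocities, the toy sketch below lags each skinned vertex behind the motion of the bones it is bound to, scaled by a painted per-vertex weight. The formula and parameter names are assumptions made for illustration, not the paper's formulation.

      import numpy as np

      def velocity_drag(vertices, skin_weights, bone_velocities, control, dt=1.0 / 60.0):
          """Toy velocity-driven offset added on top of standard skinning.

          vertices        : (V, 3) skinned vertex positions
          skin_weights    : (V, B) linear-blend skinning weights
          bone_velocities : (B, 3) linear velocity of each bone
          control         : (V,)   painted per-vertex control weights
          """
          per_vertex_velocity = skin_weights @ bone_velocities    # (V, 3) velocity felt by each vertex
          offset = -control[:, None] * per_vertex_velocity * dt   # lag behind the skeletal motion
          return vertices + offset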
  • Item
    Adding Dynamics to Sketch-based Character Animations
    (The Eurographics Association, 2015) Guay, Martin; Ronfard, Rémi; Gleicher, Michael; Cani, Marie-Paule; Ergun Akleman (Ed.)
    Cartoonists and animators often use lines of action to emphasize dynamics in character poses. In this paper, we propose a physically-based model to simulate the line of action's motion, leading to rich motion from simple drawings. Our proposed method is decomposed into three steps. Based on user-provided strokes, we forward simulate 2D elastic motion. To ensure continuity across keyframes, we re-target the forward simulations to the drawn strokes. Finally, we synthesize a 3D character motion matching the dynamic line. The fact that the line can move freely like an elastic band raises new questions about its relationship to the body over time. The line may move faster and leave body parts behind, or the line may slide slowly towards other body parts for support. We conjecture that the artist seeks to maximize the filling of the line (with the character's body) while respecting basic realism constraints such as balance. Based on these insights, we provide a method that synthesizes 3D character motion, given discontinuously constrained body parts that are specified by the user at key moments.
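    The forward-simulated 2D elastic motion mentioned in the first step can be pictured with a toy mass-spring chain built from the drawn stroke, as in the sketch below; the paper's actual elastic line-of-action model and its parameters are its own and are not reproduced here.

      import numpy as np

      def simulate_stroke(points, steps=200, dt=0.005, k=200.0, damping=0.98,
                          gravity=(0.0, -9.8)):
          """Toy forward simulation of a drawn 2D stroke as a chain of
          particles connected by springs; the first stroke point is pinned."""
          pos = np.asarray(points, dtype=float).copy()         # (N, 2) stroke samples
          vel = np.zeros_like(pos)
          rest = np.linalg.norm(np.diff(pos, axis=0), axis=1)  # rest length of each segment
          for _ in range(steps):
              force = np.tile(np.asarray(gravity, dtype=float), (len(pos), 1))
              seg = pos[1:] - pos[:-1]
              length = np.linalg.norm(seg, axis=1, keepdims=True)
              spring = k * (length - rest[:, None]) * seg / np.maximum(length, 1e-9)
              force[:-1] += spring        # each particle is pulled towards the next one
              force[1:] -= spring         # equal and opposite reaction
              vel = damping * (vel + dt * force)
              vel[0] = 0.0                # keep the first stroke point fixed
              pos = pos + dt * vel
          return pos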
  • Item
    Quadruped Animation
    (The Eurographics Association, 2008) Skrba, Ljiljana; Reveret, Lionel; Hetroy, Franck; Cani, Marie-Paule; O'Sullivan, Carol; Theoharis Theoharis and Philip Dutre (Eds.)
    Films like Shrek, Madagascar, The Chronicles of Narnia and Charlotte's Web all have something in common: realistic quadruped animations. While the animation of animals has been popular for a long time, the technical challenges associated with creating highly realistic, computer-generated creatures have been receiving increasing attention recently. The entertainment, education and medical industries have increased the demand for simulation of realistic animals in the computer graphics area. In order to achieve this, several challenges need to be overcome: gathering and processing data that embodies the natural motion of an animal, which is made more difficult by the fact that most animals cannot easily be motion-captured; building accurate kinematic models for animals, in particular with adapted animation skeletons; and developing either kinematic or physically-based animation methods, embedding some a priori knowledge about the way quadrupeds locomote and/or building on examples of real motion. In this state of the art report, we present an overview of the common techniques used to date for realistic quadruped animation. This includes an outline of the various ways that realistic quadruped motion can be achieved, through video-based acquisition, physics-based models, inverse kinematics, or some combination of the above. The research presented represents a cross-fertilisation of vision, graphics and interaction methods.
  • Item
    Drawing for Illustration and Annotation in 3D
    (Blackwell Publishers Ltd and the Eurographics Association, 2001) Bourguignon, David; Cani, Marie-Paule; Drettakis, George
    We present a system for sketching in 3D, which strives to preserve the degree of expression, imagination, and simplicity of use achieved by 2D drawing. Our system directly uses user-drawn strokes to infer the sketches representing the same scene from different viewpoints, rather than attempting to reconstruct a 3D model. This is achieved by interpreting strokes as indications of a local surface silhouette or contour. Strokes thus deform and disappear progressively as we move away from the original viewpoint. They may be occluded by objects indicated by other strokes, or, in contrast, be drawn above such objects. The user draws on a plane which can be positioned explicitly or relative to other objects or strokes in the sketch. Our system is interactive, since we use fast algorithms and graphics hardware for rendering. We present applications to education, design, architecture and fashion, where 3D sketches can be used alone or as an annotation of an existing 3D model.
  • Item
    EcoBrush: Interactive Control of Visually Consistent Large-Scale Ecosystems
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Gain, James; Long, Harry; Cordonnier, Guillaume; Cani, Marie-Paule; Loic Barthe and Bedrich Benes (Eds.)
    One challenge in portraying large-scale natural scenes in virtual environments is specifying the attributes of plants, such as species, size and placement, in a way that respects the features of natural ecosystems, while remaining computationally tractable and allowing user design. To address this, we combine ecosystem simulation with a distribution analysis of the resulting plant attributes to create biome-specific databases, indexed by terrain conditions, such as temperature, rainfall, sunlight and slope. For a specific terrain, interpolated entries are drawn from this database and used to interactively synthesize a full ecosystem, while retaining the fidelity of the original simulations. A painting interface supplies users with semantic brushes for locally adjusting ecosystem age, plant density and variability, as well as optionally picking from a palette of precomputed distributions. Since these brushes are keyed to the underlying terrain properties, a balance between user control and real-world consistency is maintained. Our system can be used to interactively design ecosystems up to 5x5 km² in extent, or to automatically generate even larger ecosystems in a fraction of the time of a full simulation, while demonstrating known properties from plant ecology such as succession, self-thinning, and underbrush, across a variety of biomes.
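    The terrain-conditioned database lookup described above can be pictured with a toy interpolation scheme like the one below; the condition vector, the inverse-distance weighting and all names are assumptions made for illustration, not the paper's data structures.

      import numpy as np

      def lookup_distribution(database, conditions, k=4):
          """Toy lookup: blend the k stored entries whose terrain conditions
          (e.g. temperature, rainfall, sunlight, slope) are closest to the
          query, using inverse-distance weights.

          database   : list of (condition_vector, plant_distribution) pairs
          conditions : query condition vector for one terrain location
          """
          keys = np.array([key for key, _ in database])
          values = np.array([value for _, value in database])
          dist = np.linalg.norm(keys - np.asarray(conditions), axis=1)
          nearest = np.argsort(dist)[:k]                # indices of the k closest entries
          weights = 1.0 / (dist[nearest] + 1e-9)        # inverse-distance weights
          weights /= weights.sum()
          return (weights[:, None] * values[nearest]).sum(axis=0)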
  • Item
    PointCloudSlicer: Gesture-based Segmentation of Point Clouds
    (The Eurographics Association, 2023) Gowtham, Hari Hara; Parakkat, Amal Dev; Cani, Marie-Paule; Babaei, Vahid; Skouras, Melina
    Segmentation is a fundamental problem in point-cloud processing: classifying points into consistent regions, where the criterion for consistency depends on the application. In this paper, we introduce a simple, interactive framework enabling the user to quickly segment a point cloud in a few cutting gestures in a perceptually consistent way. As the user perceives the limit of a shape part, they draw a simple separation stroke over the current 2D view. The point cloud is then segmented without needing any intermediate meshing step. Technically, we find an optimal, perceptually consistent cutting plane constrained by the user's stroke and use it for segmentation, while automatically restricting the extent of the cut to the closest shape part from the current viewpoint. This enables users to effortlessly segment complex point clouds from an arbitrary viewpoint, with the possibility of handling self-occlusions.
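    The core cutting-plane step can be pictured with the minimal sketch below, which simply splits points by their signed distance to a plane; the actual method also derives the plane from the stroke and restricts the cut to the shape part nearest to the viewpoint, which is omitted here.

      import numpy as np

      def split_by_plane(points, plane_point, plane_normal):
          """Toy segmentation of a point cloud by a cutting plane.

          points       : (N, 3) point cloud
          plane_point  : a point lying on the cutting plane
          plane_normal : the plane normal (need not be unit length)
          """
          pts = np.asarray(points, dtype=float)
          signed = (pts - np.asarray(plane_point)) @ np.asarray(plane_normal)  # signed distance, up to scale
          return pts[signed >= 0.0], pts[signed < 0.0]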
  • Item
    Large Scale Terrain Generation from Tectonic Uplift and Fluvial Erosion
    (The Eurographics Association and John Wiley & Sons Ltd., 2016) Cordonnier, Guillaume; Braun, Jean; Cani, Marie-Paule; Benes, Bedrich; Galin, Éric; Peytavie, Adrien; Guérin, Éric; Joaquim Jorge and Ming Lin (Eds.)
    At large scale, landscapes result from the combination of two major processes: tectonics, which generates the main relief through crust uplift, and weather, which accounts for erosion. This paper presents the first method in computer graphics that combines uplift and hydraulic erosion to generate visually plausible terrains. Given a user-painted uplift map, we generate a stream graph over the entire domain embedding elevation information and stream flow. Our approach relies on the stream power equation introduced in geology for hydraulic erosion. By combining crust uplift and stream power erosion we generate large realistic terrains at a low computational cost. Finally, we convert this graph into a digital elevation model by blending landform feature kernels whose parameters are derived from the information in the graph. Our method gives high-level control over the large-scale dendritic structures of the resulting river networks, watersheds, and mountain ridges.
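    For reference, the stream power equation mentioned above is classically written as dh/dt = U - K * A^m * S^n, with uplift rate U, drainage area A and slope S. The toy update below applies one explicit step of that law per node; the stream-graph construction, the solver and the parameter values used in the paper are not reproduced here.

      import numpy as np

      def uplift_erosion_step(height, uplift, area, slope, dt, k=2e-6, m=0.5, n=1.0):
          """One explicit step of the stream power law dh/dt = U - K * A^m * S^n.

          height, uplift, area, slope : per-node arrays of the same shape
          (elevation, tectonic uplift rate, drainage area, downstream slope).
          The constants k, m, n are typical textbook values, not the paper's.
          """
          erosion = k * np.power(area, m) * np.power(slope, n)
          return height + dt * (uplift - erosion)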
  • Item
    Matisse: Painting 2D regions for Modeling Free-Form Shapes
    (The Eurographics Association, 2008) Bernhardt, Adrien; Pihuit, Adeline; Cani, Marie-Paule; Barthe, Loic; Christine Alvarado and Marie-Paule Cani (Eds.)
    This paper presents Matisse, an interactive modeling system aimed at providing the public with a very easy way to design free-form 3D shapes. The user progressively creates a model by painting 2D regions of arbitrary topology while freely changing the view-point and zoom factor. Each region is converted into a 3D shape, using a variant of implicit modeling that fits convolution surfaces to regions with no need for any optimization step. We use intuitive, automatic ways of inferring the thickness and position in depth of each implicit primitive, enabling the user to concentrate only on shape design. When the user paints partly on top of an existing primitive, the shapes are blended in a local region around the intersection, avoiding some of the well-known unwanted blending artifacts of implicit surfaces. The locality of the blend depends on the size of the smallest feature, enabling the user to enhance large, smooth primitives with smaller details without blurring the latter away. As the results show, our system enables any unprepared user to create 3D geometry in a very intuitive way.
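    As background for the convolution-surface primitives mentioned above, the sketch below evaluates a scalar field by integrating a compactly supported kernel along a polyline skeleton; the surface is then a level set of that field. The kernel, the sampling and all parameters are illustrative assumptions, not the system's own primitives.

      import numpy as np

      def convolution_field(p, skeleton, radius=1.0, samples_per_segment=16):
          """Toy convolution-surface field at query point p for a polyline skeleton.

          p        : (3,) query point
          skeleton : (K, 3) polyline skeleton vertices
          """
          p = np.asarray(p, dtype=float)
          skeleton = np.asarray(skeleton, dtype=float)
          field = 0.0
          for a, b in zip(skeleton[:-1], skeleton[1:]):
              ts = np.linspace(0.0, 1.0, samples_per_segment)
              samples = a[None, :] + ts[:, None] * (b - a)[None, :]
              d = np.linalg.norm(samples - p, axis=1) / radius
              kernel = np.where(d < 1.0, (1.0 - d ** 2) ** 3, 0.0)  # smooth, finite-support falloff
              field += kernel.mean() * np.linalg.norm(b - a)        # approximate line integral
          return field   # the implicit surface is the level set field(p) = iso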