Search Results
Now showing 1 - 10 of 24
Item: Avatar Markup Language (The Eurographics Association, 2002)
Kshirsagar, Sumedha; Magnenat-Thalmann, Nadia; Guye-Vuillème, Anthony; Thalmann, Daniel; Kamyab, Kaveh; Mamdani, Ebrahim; S. Mueller and W. Stuerzlinger (eds.)
Synchronization of speech, facial expressions and body gestures is one of the most critical problems in realistic avatar animation in virtual environments. In this paper, we address this problem by proposing a new high-level animation language to describe avatar animation. The Avatar Markup Language (AML), based on XML, encapsulates Text to Speech, Facial Animation and Body Animation in a unified manner with appropriate synchronization. We use low-level animation parameters, defined by the MPEG-4 standard, to demonstrate the use of the AML; however, AML itself is independent of any particular low-level parameters. AML can be used effectively by intelligent software agents to control their 3D graphical representations in virtual environments. With the help of the associated tools, AML also makes it quick and easy to create and share 3D avatar animations. We also discuss how the language has been developed and used within the SoNG project framework, and describe the tools developed to use AML in a real-time animation system incorporating intelligent agents and 3D avatars.

Item: Populating Ancient Pompeii with Crowds of Virtual Romans (The Eurographics Association, 2007)
Maim, Jonathan; Haegler, Simon; Yersin, Barbara; Mueller, Pascal; Thalmann, Daniel; Gool, Luc Van; D. Arnold and F. Niccolucci and A. Chalmers (eds.)
Pompeii was a Roman city, destroyed and completely buried by an eruption of the volcano Mount Vesuvius. We have revived its past by creating a 3D model of its former appearance and populating it with crowds of Virtual Romans. In this paper, we detail the process, based on archaeological data, of simulating ancient Pompeii life in real time.
In the first step, an annotated city model is generated using procedural modelling. These annotations contain semantic data, such as land usage, building age, and window/door labels. In the second phase, the semantics are automatically interpreted to populate the scene and trigger special behaviors in the crowd, depending on the location of the characters. Finally, we describe the system pipeline, which allows for the simulation of thousands of Virtual Romans in real time.

Item: Planning Collision-Free Reaching Motions for Interactive Object Manipulation and Grasping (Blackwell Publishers, Inc and the Eurographics Association, 2003)
Kallmann, Marcelo; Aubel, Amaury; Abaci, Tolga; Thalmann, Daniel
We present new techniques that use motion planning algorithms based on probabilistic roadmaps to control 22 degrees of freedom (DOFs) of human-like characters in interactive applications. Our main purpose is the automatic synthesis of collision-free reaching motions for both arms, with automatic column control and leg flexion. Generated motions are collision-free, in equilibrium, and respect articulation range limits. To deal with the high dimension (22) of our configuration space, we bias the random distribution of configurations to favor postures most useful for reaching and grasping. In addition, extensions are presented for interactively generating object manipulation sequences: a probabilistic inverse kinematics solver that proposes goal postures matching pre-designed grasps; dynamic update of roadmaps when obstacles change position; online planning of object location transfer; and automatic stepping control to enlarge the character's reachable space.
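The probabilistic-roadmap planning summarized in this abstract can be illustrated with a minimal sketch. This is a hypothetical simplification (a 2D configuration space with one circular obstacle and uniform sampling), not the paper's 22-DOF biased sampler:

```python
import math
import random

# Minimal probabilistic roadmap (PRM) in a 2D configuration space.
# Hypothetical simplification: the paper plans in 22 dimensions with
# biased sampling; here the space is the unit square with one circular
# obstacle and uniform sampling.

OBSTACLE = ((0.5, 0.5), 0.2)  # (center, radius)

def collision_free(q):
    (cx, cy), r = OBSTACLE
    return math.hypot(q[0] - cx, q[1] - cy) > r

def edge_free(a, b, steps=20):
    # Check intermediate configurations along the straight segment.
    return all(
        collision_free((a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t))
        for t in (i / steps for i in range(steps + 1))
    )

def build_roadmap(n_samples=200, k=8, seed=1):
    random.seed(seed)
    nodes = []
    while len(nodes) < n_samples:
        q = (random.random(), random.random())
        if collision_free(q):
            nodes.append(q)
    edges = {i: [] for i in range(len(nodes))}
    for i, q in enumerate(nodes):
        nearest = sorted(range(len(nodes)),
                         key=lambda j: math.dist(q, nodes[j]))[1:k + 1]
        for j in nearest:
            if j not in edges[i] and edge_free(q, nodes[j]):
                edges[i].append(j)
                edges[j].append(i)
    return nodes, edges

def query(nodes, edges, start_i, goal_i):
    # Breadth-first search over the roadmap.
    frontier, parent = [start_i], {start_i: None}
    while frontier:
        i = frontier.pop(0)
        if i == goal_i:
            path = []
            while i is not None:
                path.append(i)
                i = parent[i]
            return path[::-1]
        for j in edges[i]:
            if j not in parent:
                parent[j] = i
                frontier.append(j)
    return None
```

A real planner would additionally need the extensions the abstract lists (goal-posture generation, roadmap updates under moving obstacles, and stepping control), which a static roadmap like this does not cover.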
This is, to our knowledge, the first time probabilistic planning techniques have been used to automatically generate collision-free reaching motions involving the entire body of a human-like character at interactive frame rates.
Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism

Item: Real-time Shader Rendering for Crowds in Virtual Heritage (The Eurographics Association, 2005)
Ciechomski, Pablo de Heras; Schertenleib, Sébastien; Maïm, Jonathan; Maupu, Damien; Thalmann, Daniel; Mark Mudge and Nick Ryan and Roberto Scopigno (eds.)
We present a method for fully dynamically rendered virtual humans with variety in color, animation and appearance. This is achieved using vertex and fragment shaders programmed in the OpenGL Shading Language (GLSL). We then compare our results with an approach based on the fixed-function pipeline. We also show a color variety creation GUI using HSB color space restriction. An improved version of the LOD pipeline for our virtual characters is presented. With these new techniques, we are able to use the full dynamic animation range in the crowd populating the Aphrodisias odeon (part of the ERATO project): a greater repertoire of animations, smooth transitions, and more variety and speed. We show how a multi-view of the rendering data can ensure good batching of rendering primitives and comfortable constant-time access.

Item: Course: Modeling Individualities in Groups and Crowds (The Eurographics Association, 2009)
Donikian, Stéphane; Magnenat-Thalmann, Nadia; Pettré, Julien; Thalmann, Daniel; K. Museth and D. Weiskopf (eds.)
Crowds are part of our everyday life experience and are essential when working with realistic interactive environments. Domains of application for such simulations range from populating artificial cities to entertainment and virtual reality exposure therapy for crowd phobia.
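The HSB color-space restriction mentioned in the shader-rendering item above can be sketched as follows. The sampler, its parameters, and the example ranges are hypothetical, not the paper's actual GUI:

```python
import colorsys
import random

# Hypothetical sketch of HSB (HSV) color-space restriction for crowd
# appearance variety: each clothing part gets a designer-chosen range of
# hue, saturation and brightness, and per-character colors are sampled
# inside that range so the crowd varies without clashing.

def sample_color(h_range, s_range, b_range, rng):
    h = rng.uniform(*h_range)
    s = rng.uniform(*s_range)
    b = rng.uniform(*b_range)
    return colorsys.hsv_to_rgb(h, s, b)  # RGB triple, components in [0, 1]

def crowd_palette(n, restriction, seed=0):
    # Deterministic per-seed, so the same crowd can be regenerated.
    rng = random.Random(seed)
    return [sample_color(*restriction, rng) for _ in range(n)]

# Example: tunics restricted to desaturated reds.
tunic_restriction = ((0.95, 1.0), (0.2, 0.5), (0.4, 0.9))
colors = crowd_palette(10, tunic_restriction)
```

Restricting ranges per garment, rather than sampling RGB freely, is what keeps the variety plausible: every sampled color stays inside the designer's intended family.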
We mainly focus on real-time applications where the visual uniqueness of the characters composing a crowd is paramount. On the one hand, several thousand virtual humans must be displayed at high frame rates. On the other hand, each character has to differ from all the others, and its visual quality must be highly detailed. Variety in rendering is defined as having different forms or types, and is necessary to create believable crowds, as opposed to uniform ones. For a human crowd, variation can come from the following aspects: gender, age, morphology, head, type of clothes, color of clothes, and behaviors.

Item: Crowdbrush: Interactive Authoring of Real-time Crowd Scenes (The Eurographics Association, 2004)
Ulicny, Branislav; Ciechomski, Pablo de Heras; Thalmann, Daniel; R. Boulic and D. K. Pai (eds.)
Recent advances in computer graphics techniques and the increasing power of graphics hardware have made it possible to display and animate large crowds in real time. Most research efforts have been directed towards improving rendering or behavior control; the question of how to author crowd scenes efficiently is usually not addressed. We introduce a novel approach to creating complex scenes involving thousands of animated individuals in a simple and intuitive way. By employing a brush metaphor, analogous to the tools used in image manipulation programs, we can distribute, modify and control crowd members in real time, with immediate visual feedback. We define the concepts of operators and instance properties, which allow us to create and manage variety in populations of virtual humans. An efficient technique for rendering up to several thousand fully three-dimensional polygonal characters with keyframed animations at interactive frame rates is presented.
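The brush metaphor with operators and instance properties described in the Crowdbrush abstract might be sketched like this. All class and function names here are illustrative assumptions; the abstract does not describe the tool at this level of detail:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of Crowdbrush-style authoring: a brush applies an
# operator to every crowd instance inside its radius, and variety comes
# from per-instance properties. Names are illustrative, not the paper's.

@dataclass
class Instance:
    x: float
    y: float
    properties: dict = field(default_factory=dict)

class Brush:
    def __init__(self, radius, operator):
        self.radius = radius
        self.operator = operator  # callable applied to each instance hit

    def apply(self, crowd, cx, cy):
        # Affect only instances inside the brush footprint.
        for inst in crowd:
            if (inst.x - cx) ** 2 + (inst.y - cy) ** 2 <= self.radius ** 2:
                self.operator(inst)

def set_animation(name):
    def op(inst):
        inst.properties["animation"] = name
    return op

# Usage: paint a "cheer" animation onto part of a row of characters.
crowd = [Instance(float(i), 0.0) for i in range(10)]
brush = Brush(radius=2.5, operator=set_animation("cheer"))
brush.apply(crowd, cx=5.0, cy=0.0)
```

The design point is that the brush never edits geometry directly; it only rewrites per-instance properties, which the renderer then interprets, so the same mechanism covers distribution, appearance and behavior edits.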
The potential of our approach is demonstrated by authoring a scenario of a virtual audience in a theater and a scenario of a pedestrian crowd in a city.

Item: An Interdisciplinary VR-architecture for 3D Chatting with Non-verbal Communication (The Eurographics Association, 2011)
Gobron, Stephane; Ahn, Junghyun; Silvestre, Quentin; Thalmann, Daniel; Rank, Stefan; Skowron, Marcin; Paltoglou, Georgios; Thelwall, Michael; Sabine Coquillart and Anthony Steed and Greg Welch (eds.)
Communication between avatar and agent has already been treated from different but specialized perspectives. In contrast, this paper gives a balanced view of every key architectural aspect: from text analysis to computer graphics, the chatting system and the emotional model. Non-verbal communication, such as facial expression, gaze, or head orientation, is crucial for simulating realistic behavior, but is still a neglected aspect of the simulation of virtual societies. In response, this paper presents the modularity necessary to allow virtual humans (VHs) to converse with consistent facial expressions, whether between two users through their avatars, between an avatar and an agent, or between an avatar and a Wizard of Oz. We believe such an approach is particularly suitable for the design and implementation of applications involving VH interaction in virtual worlds. To this end, three key features are needed to design and implement this system, entitled 3D-emoChatting. First, a global architecture that combines components from several research fields. Second, real-time analysis and management of emotions that allows interactive dialogues with non-verbal communication. Third, a model of a virtual emotional mind, called emoMind, that can simulate individual emotional characteristics.
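The abstract does not specify how emoMind works internally. As a purely hypothetical illustration of the kind of component it describes, an individual emotional state could be a decaying valence/arousal pair nudged by text-analysis events and mapped to a facial expression:

```python
from dataclasses import dataclass

# Purely hypothetical sketch in the spirit of the emoMind component: a
# valence/arousal pair that decays toward neutral and is nudged by
# events from text analysis. Not the paper's actual model.

@dataclass
class EmotionalState:
    valence: float = 0.0   # -1 (negative) .. +1 (positive)
    arousal: float = 0.0   # 0 (calm) .. 1 (excited)
    decay: float = 0.9     # per-step pull toward neutral

    def step(self, d_valence=0.0, d_arousal=0.0):
        self.valence = max(-1.0, min(1.0, self.valence * self.decay + d_valence))
        self.arousal = max(0.0, min(1.0, self.arousal * self.decay + d_arousal))

    def facial_expression(self):
        # Map the state to a coarse expression label for the renderer.
        if self.arousal < 0.2:
            return "neutral"
        return "smile" if self.valence >= 0.0 else "frown"

mind = EmotionalState()
mind.step(d_valence=0.6, d_arousal=0.5)  # e.g. a positive chat message
```

Keeping the state continuous and decaying, rather than switching discrete moods, is one way to get the consistent, gradually changing facial expressions the paper calls for.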
To conclude the paper, we briefly present a basic description of a user test whose full analysis is beyond the scope of the present paper.

Item: Virtual Humans: Ten Problems Still Not Completely Solved (Eurographics Association, 2000)
Thalmann, Daniel
During the 1980s, the academic establishment paid only scant attention to research on the animation of virtual humans. Today, however, almost every graphics journal, popular magazine, or newspaper devotes some space to Virtual Humans and their applications. But there are still many problems in generating believable Virtual Humans. The purpose of this paper is to identify ten main problems that must be solved to create and animate believable Virtual Humans.

Item: Inhabited Virtual Heritage (Eurographics Association, 2001)
Magnenat-Thalmann, Nadia; Chalmers, Alan; Thalmann, Daniel
Two techniques, depending on the interest:
- accuracy and precision of the obtained object model shapes (CAD systems, medical applications);
- visual realism and speed for animation of the reconstructed models (internet applications, Virtual Reality applications).

Item: EG 2005 Tutorial on Mixed Realities in Inhabited Worlds (The Eurographics Association, 2005)
Magnenat-Thalmann, Nadia; Thalmann, Daniel; Fua, Pascal; Vexo, Frederic; Kim, HyungSeok; Ming Lin and Celine Loscos (eds.)
1. Outline of the tutorial
   1.1 Concepts and State of the Art of mixed realities in inhabited worlds
       1.1.1 Mixed Realities in inhabited worlds
       1.1.2 Believability and Presence
   1.2 Perception, Sensors and Immersive hardware for MR in Inhabited Worlds
       1.2.1 Vision-Based 3D Tracking and Pose Estimation for MR
       1.2.2 Perception and sensors for Virtual Humans
       1.2.3 Hardware for mixed-reality inhabited virtual worlds
       1.2.4 Emotional and conversational virtual humans
   1.3 MR in various applications
       1.3.1 Simulating Life in the mixed-reality Pompeii world
       1.3.2 Simulating actors and audiences in ancient theaters
       1.3.3 MR in STAR, an industrial project
       1.3.4 Feeling presence in the treatment of social phobia
2. Syllabus
3. Resume of the presenters
4. Selected Publications