Search Results
Now showing 1 - 2 of 2
Item: An Interdisciplinary VR-architecture for 3D Chatting with Non-verbal Communication (The Eurographics Association, 2011)
Authors: Gobron, Stephane; Ahn, Junghyun; Silvestre, Quentin; Thalmann, Daniel; Rank, Stefan; Skowron, Marcin; Paltoglou, Georgios; Thelwall, Michael
Editors: Sabine Coquillart, Anthony Steed, Greg Welch

Communication between avatars and agents has already been treated from different but specialized perspectives. In contrast, this paper gives a balanced view of every key architectural aspect: from text analysis to computer graphics, the chatting system, and the emotional model. Non-verbal communication such as facial expression, gaze, or head orientation is crucial for simulating realistic behavior, yet it remains a neglected aspect in the simulation of virtual societies. In response, this paper presents the modularity needed to let virtual humans (VHs) converse with consistent facial expressions, whether between two users through their avatars, between an avatar and an agent, or between an avatar and a Wizard of Oz. We believe such an approach is particularly suitable for the design and implementation of applications involving VH interaction in virtual worlds. To this end, three key features are needed to design and implement this system, entitled 3D-emoChatting: first, a global architecture that combines components from several research fields; second, real-time analysis and management of emotions that enables interactive dialogues with non-verbal communication; third, a model of a virtual emotional mind, called emoMind, that simulates individual emotional characteristics. The paper concludes with a brief description of a user test whose full treatment is beyond the scope of the present paper.

Item: Believable Virtual Characters in Human-Computer Dialogs (The Eurographics Association, 2011)
Authors: Jung, Yvonne; Kuijper, Arjan; Fellner, Dieter W.; Kipp, Michael; Miksatko, Jan; Gratch, Jonathan; Thalmann, Daniel
Editors: N. John, B. Wyvill

For many application areas where a task is most naturally expressed by talking, or where standard input devices are difficult to use or unavailable, virtual characters are well suited as an intuitive man-machine interface because of their inherent ability to simulate verbal as well as nonverbal communicative behavior. This type of interface is made possible by multimodal dialog systems, which extend common speech dialog systems with additional modalities, just as in human-human interaction. A multimodal dialog system consists of at least an auditory and a graphical component, and communication is based on speech and nonverbal communication alike. Employing virtual characters as personal and believable dialog partners in multimodal dialogs entails several challenges, however, because it requires not only reliable and consistent motion and dialog behavior but also convincing nonverbal communication and affective components. Modeling the mind and creating intelligent communicative behavior on the encoding side is an active field of research in artificial intelligence; the visual representation of a character and its perceivable behavior, such as facial expressions and gestures, belongs from a decoding perspective to the domain of computer graphics and likewise raises many open issues concerning natural communication. Therefore, in this report we give a comprehensive overview of how to go from communication models to actual animation and rendering.
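The first item above describes emoMind as a model of a virtual emotional mind that simulates individual emotional characteristics, but the abstract gives no internals. The following is a minimal sketch under assumed details: a two-dimensional valence/arousal state that decays toward a per-character baseline and is nudged by emotional events from text analysis. All names here (EmoState, EmoMind, perceive, expression) are illustrative and not taken from the paper.

```python
"""Hypothetical emoMind-style emotional state model (not the paper's actual design)."""
from dataclasses import dataclass
from typing import Optional


@dataclass
class EmoState:
    valence: float  # -1 (negative) .. +1 (positive)
    arousal: float  # 0 (calm) .. 1 (excited)


class EmoMind:
    def __init__(self, baseline: Optional[EmoState] = None, half_life_s: float = 20.0):
        # Per-character baseline mood gives "individual emotional characteristics".
        self.baseline = baseline or EmoState(valence=0.0, arousal=0.2)
        self.state = EmoState(self.baseline.valence, self.baseline.arousal)
        self.half_life_s = half_life_s  # how fast emotion relaxes to baseline

    def decay(self, dt_s: float) -> None:
        # Exponential relaxation toward the character's baseline mood.
        k = 0.5 ** (dt_s / self.half_life_s)
        self.state.valence = self.baseline.valence + (self.state.valence - self.baseline.valence) * k
        self.state.arousal = self.baseline.arousal + (self.state.arousal - self.baseline.arousal) * k

    def perceive(self, valence_delta: float, arousal_delta: float) -> None:
        # Apply an emotional event, e.g. sentiment scores from chat-text analysis.
        self.state.valence = max(-1.0, min(1.0, self.state.valence + valence_delta))
        self.state.arousal = max(0.0, min(1.0, self.state.arousal + arousal_delta))

    def expression(self) -> str:
        # Map the continuous state to a coarse facial-expression label that a
        # rendering component could translate into blend-shape weights.
        if self.state.arousal < 0.25:
            return "neutral"
        return "smile" if self.state.valence >= 0.0 else "frown"


if __name__ == "__main__":
    mind = EmoMind()
    mind.perceive(valence_delta=0.6, arousal_delta=0.5)  # a friendly message arrives
    print(mind.expression())  # -> "smile"
    mind.decay(dt_s=60.0)     # a minute of silence; mood relaxes toward baseline
    print(mind.expression())  # -> "neutral"
```

The half-life decay is one plausible way to get the interactive, real-time behavior the abstract calls for: emotions spike on incoming messages and fade during pauses, so facial expressions stay consistent with the recent conversation.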
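The second item states that a multimodal dialog system consists of at least an auditory and a graphical component, with verbal and nonverbal communication handled alike. As a rough illustration of that split, the sketch below attaches nonverbal annotations to a verbal reply and hands the same turn to two stand-in output components. The DialogTurn structure and the keyword rules in annotate() are assumptions for illustration, not the report's actual pipeline.

```python
"""Hypothetical multimodal dialog turn: one reply feeds both output channels."""
from dataclasses import dataclass, field
from typing import List


@dataclass
class DialogTurn:
    text: str                       # what the character says (sent to speech synthesis)
    expression: str = "neutral"     # facial expression for the renderer
    gestures: List[str] = field(default_factory=list)  # gesture cues for the animator


def annotate(reply: str) -> DialogTurn:
    """Attach simple nonverbal behavior to a verbal reply.

    A real system would derive this from dialog state and an affect model;
    a few keyword rules stand in for that machinery here.
    """
    turn = DialogTurn(text=reply)
    lowered = reply.lower()
    if "?" in reply:
        turn.gestures.append("head_tilt")     # questions invite a head tilt
    if any(w in lowered for w in ("great", "glad", "thanks")):
        turn.expression = "smile"
    if len(reply.split()) > 8:
        turn.gestures.append("beat_gesture")  # longer utterances get beat gestures
    return turn


def render(turn: DialogTurn) -> None:
    # Stand-ins for the two components: the auditory channel (TTS) and the
    # graphical channel (character animation) consume the same dialog turn.
    print(f"[TTS]      {turn.text}")
    print(f"[Animator] expression={turn.expression} gestures={turn.gestures}")


if __name__ == "__main__":
    render(annotate("I'm glad you asked! Shall we take a look at the model together?"))
```

Keeping the verbal text and the nonverbal cues in one turn object is one way to get the consistency the abstract demands: the speech and the accompanying expressions and gestures are produced from the same decision, so the two channels cannot drift apart.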