Title: Narrative Use of Sign Language by a Virtual Character for the Hearing Impaired
Authors: Rieger, Thomas; Braun, Norbert
Year: 2003
Available online: 2015-02-16
ISSN: 1467-8659
DOI: 10.1111/1467-8659.t01-2-00713
URI: https://doi.org/10.1111/1467-8659.t01-2-00713
Pages: 651-660

Abstract:
This paper describes the concept and control of a 3D virtual character system with facial expressions and gestures as a conversational user interface with narrative expressiveness for the hearing impaired. The gestures and facial expressions are based on morphing techniques. The system allows the generation of sign language and mouth motion in real time from text, at the level of lip-reading quality. The concept of Narrative Extended Speech Acts (NESA) is introduced, based on Interactive Storytelling techniques and the concepts of Narrative Conflict and Suspense Progression. We define a set of annotation tags to be used with NESAs. We use the NESAs to classify conversation fragments and to enhance computer-generated sign language. We describe how the sign language gestures are generated and show the possibilities for editing sign language gestures. Furthermore, we give details on how the NESAs are mapped to gestures. We show the possibilities of controlling the virtual character's behaviour and gestures in a human-oriented way and provide an outlook on future work.

Categories and Subject Descriptors (according to ACM CCS): I.3.6 [Computer Graphics]: Methodology and Techniques