Search Results

  • Sketch-Based Posing of 3D Faces for Facial Animation
    (The Eurographics Association, 2010) Gunnarsson, Orn; Maddock, Steve; John Collomosse and Ian Grimstead
    This paper presents a novel approach to creating 3D facial animation using a sketch-based interface where the animation is generated by interpolating a sequence of sketched key poses. The user does not need any knowledge of the underlying mechanism used to create different expressions or facial poses, and no animation controls or parameters are directly manipulated. Instead, the user sketches the desired shape of a facial feature and the system reconstructs a 3D feature which fits the sketched stroke. This is achieved using a maximum likelihood framework where a statistical model in conjunction with Hidden Markov Models handles sketch detection, and a hierarchical statistical mapping approach reconstructs a posed 3D mesh from a low-dimensional representation.
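The animation stage above interpolates a sequence of sketched key poses. A minimal sketch of that idea, assuming each pose is a low-dimensional parameter vector and using a simple linear blend (the function names and the blend choice are illustrative assumptions, not the paper's code):

```python
# Hypothetical sketch: generating in-between frames from a sequence of
# sketched key poses. Each pose is a list of pose parameters; the linear
# blend here stands in for whatever interpolation the system actually uses.

def lerp(a, b, t):
    """Linearly blend two pose parameter vectors."""
    return [x + t * (y - x) for x, y in zip(a, b)]

def interpolate_poses(key_poses, n_frames):
    """Produce n_frames poses by interpolating the key-pose sequence."""
    frames = []
    segments = len(key_poses) - 1
    for i in range(n_frames):
        u = i / (n_frames - 1) * segments  # position along the pose sequence
        k = min(int(u), segments - 1)      # index of the current segment
        frames.append(lerp(key_poses[k], key_poses[k + 1], u - k))
    return frames
```

In practice a smoother curve (e.g. a spline through the key poses) would avoid velocity discontinuities at the keys; the linear version just shows the structure.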
  • Evaluation of A Viseme-Driven Talking Head
    (The Eurographics Association, 2010) Dey, Priya; Maddock, Steve; Nicolson, Rod; John Collomosse and Ian Grimstead
This paper introduces a three-dimensional virtual head for use in speech tutoring applications. The system achieves audiovisual speech synthesis using viseme-driven animation and a coarticulation model to automatically generate speech from text. The talking head was evaluated using a modified rhyme test for intelligibility. The audiovisual speech animation was found to yield higher intelligibility for isolated words than acoustic speech alone.
  • Audio-Visual Animation of Urban Space
    (The Eurographics Association, 2010) Richmond, Paul; Smyrnova, Yuliya; Maddock, Steve; Kang, Jian; John Collomosse and Ian Grimstead
We present a technique for simulating accurate physically modelled acoustics within an outdoor urban environment, and a tool that presents the acoustics alongside a visually rendered counterpart. Acoustic modelling combines ray tracing, to simulate specular sound wave reflections, with radiosity, to simulate diffuse reflections. Sound rendering is applied to the energy response of the acoustic modelling stage and is used to produce a number of binaural samples for playback with headphones. The visual tool unites the acoustic renderings with an accurate 3D representation of the virtual environment, and an interpolation technique allows a user-controlled walkthrough of the simulated environment. This produces better sound localisation effects than listening from a set number of static locations.
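The specular half of the acoustic model above traces sound rays that bounce mirror-like off surfaces. A minimal sketch of one such bounce, using the standard mirror-reflection formula r = d - 2(d·n)n (the function names are illustrative; the paper's ray tracer also tracks energy decay, which is omitted here):

```python
# Sketch of a specular sound-ray reflection: the outgoing direction is the
# incoming direction mirrored about the surface's unit normal.

def dot(a, b):
    """Dot product of two 3D vectors given as sequences."""
    return sum(x * y for x, y in zip(a, b))

def reflect(direction, normal):
    """Specularly reflect a ray direction about a unit surface normal."""
    k = 2.0 * dot(direction, normal)
    return [d - k * n for d, n in zip(direction, normal)]
```

For example, a ray travelling down at 45 degrees onto a horizontal surface, `reflect([1.0, -1.0, 0.0], [0.0, 1.0, 0.0])`, leaves travelling up at 45 degrees, `[1.0, 1.0, 0.0]`.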
  • Comparison of Different Types of Visemes using a Constraint-based Coarticulation Model
    (The Eurographics Association, 2010) Lazalde, Oscar M. Martinez; Maddock, Steve; John Collomosse and Ian Grimstead
A common approach to producing visual speech is to interpolate the parameters describing a sequence of mouth shapes, known as visemes, where visemes are the visual counterpart of phonemes. A single viseme typically represents a group of phonemes that are visually similar. Often these visemes are based on the static poses used in producing a phoneme. In this paper we investigate alternative representations for visemes, produced using motion-captured data, in conjunction with a constraint-based approach for visual speech production. We show that using visemes which incorporate more contextual information produces better results than using static pose visemes.
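The many-to-one grouping described above, where visually similar phonemes share one viseme, can be sketched as a simple lookup. The groups below are a common illustrative choice based on place of articulation, not the paper's actual classification:

```python
# Hypothetical phoneme-to-viseme mapping: several phonemes that look alike
# on the lips collapse to a single viseme class.

PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "t": "alveolar", "d": "alveolar", "n": "alveolar",
}

def visemes_for(phonemes):
    """Map a phoneme sequence to its viseme sequence ('neutral' if unknown)."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]
```

A static-pose system would then fetch one mouth shape per viseme class; the paper's point is that richer, motion-captured viseme representations carrying contextual information outperform such single static poses.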
  • Craniofacial Reconstruction Based on Skull-face Models Extracted from MRI Datasets
    (The Eurographics Association, 2010) Salas, Miguel; Maddock, Steve; John Collomosse and Ian Grimstead
We present a method for extracting skull and face models from MRI datasets and show how the resulting dataset is used in a craniofacial reconstruction (CFR) system. Datasets for 60 individuals are used to produce a database of 3D skull-face models, which are then used to give faces to unknown skulls. In addition to the skull-face geometry, other information about the individuals is known and can be used to aid the reconstruction process. The system was evaluated against different criteria by providing it with different combinations of age, gender, body build and geometric skull features. Using a surface-to-surface distance metric, real and estimated faces were compared across head models from the database with a leave-one-out strategy. The reconstruction scores obtained with our CFR system were comparable in magnitude (average distance less than 2.0 mm) to other craniofacial reconstruction systems. The results suggest that it is possible to obtain acceptable face estimations in a CFR system based on skull-face information derived from MRI data.
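The surface-to-surface distance metric used in the evaluation above can be approximated as the mean nearest-vertex distance between two meshes treated as point sets; the paper's exact metric may differ, so this is only a sketch under that assumption:

```python
import math

# Sketch of a surface-to-surface comparison: average, over every vertex of
# one mesh, the Euclidean distance to its nearest vertex on the other mesh.
# A brute-force O(n*m) search; real systems would use a spatial index.

def mean_surface_distance(verts_a, verts_b):
    """Mean distance from each vertex in verts_a to its nearest in verts_b."""
    def nearest(p):
        return min(math.dist(p, q) for q in verts_b)
    return sum(nearest(p) for p in verts_a) / len(verts_a)
```

In a leave-one-out evaluation, each of the 60 skull-face models would in turn be withheld, reconstructed from the remaining 59, and scored with such a metric against its real face surface.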