Search Results

Now showing 1 - 10 of 13
  • Sketch-Based Posing of 3D Faces for Facial Animation
    (The Eurographics Association, 2010) Gunnarsson, Orn; Maddock, Steve; John Collomosse and Ian Grimstead
    This paper presents a novel approach to creating 3D facial animation using a sketch-based interface where the animation is generated by interpolating a sequence of sketched key poses. The user does not need any knowledge of the underlying mechanism used to create different expressions or facial poses, and no animation controls or parameters are directly manipulated. Instead, the user sketches the desired shape of a facial feature and the system reconstructs a 3D feature which fits the sketched stroke. This is achieved using a maximum likelihood framework where a statistical model in conjunction with Hidden Markov Models handles sketch detection, and a hierarchical statistical mapping approach reconstructs a posed 3D mesh from a low-dimensional representation.
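The keyframe stage described above, where the animation is generated by interpolating a sequence of sketched key poses, can be illustrated with a minimal sketch. The pose vectors and their meanings are invented for the example, and the paper's statistical reconstruction and HMM-based sketch detection are not modelled here:

```python
# Illustrative only: in-between facial poses produced by linearly
# interpolating a sequence of key poses. Pose components are hypothetical.

def interpolate_poses(key_poses, steps):
    """Linearly interpolate between consecutive key poses.

    key_poses: list of equal-length pose vectors (lists of floats).
    steps: number of frames generated per pair of key poses.
    Returns the full frame sequence, ending on the last key pose.
    """
    frames = []
    for a, b in zip(key_poses, key_poses[1:]):
        for i in range(steps):
            t = i / steps
            frames.append([(1 - t) * x + t * y for x, y in zip(a, b)])
    frames.append(key_poses[-1])
    return frames

neutral = [0.0, 0.0]   # e.g. brow raise, mouth open (invented components)
smile   = [0.2, 0.6]
frames = interpolate_poses([neutral, smile], steps=4)
```

In the paper the key poses themselves come from the sketch-to-mesh mapping; here they are given directly.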
  • Sketching for Real-time Control of Crowd Simulations
    (The Eurographics Association, 2017) Gonzalez, Luis Rene Montana; Maddock, Steve; Tao Ruan Wan and Franck Vidal
    Crowd simulations are used in various fields such as entertainment, training systems and city planning. However, controlling the behaviour of the pedestrians typically involves tuning of the system parameters through trial and error, a time-consuming process relying on knowledge of a potentially complex parameter set. This paper presents an interactive graphical approach to control the simulation by sketching in the simulation environment. The user is able to sketch obstacles to block pedestrians and lines to force pedestrians to follow a specific path, as well as define spawn and exit locations for pedestrians. The obstacles and lines modify the underlying navigation representation and pedestrian trajectories are recalculated in real time. The FLAMEGPU framework is used for the simulation and the game engine Unreal is used for visualisation. We demonstrate the effectiveness of the approach using a range of scenarios, producing interactive editing and frame rates for tens of thousands of pedestrians. A comparison with the commercial software MassMotion is also given.
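The core idea above, that a sketched obstacle modifies the underlying navigation representation and trajectories are recalculated, can be sketched with a toy grid navigation model. This is a stand-in, not the paper's FLAMEGPU implementation; a simple BFS plays the role of the navigation representation:

```python
# Illustrative only: sketching an obstacle updates the grid and the
# pedestrian path is recomputed. 1 = blocked cell, 0 = walkable.
from collections import deque

def shortest_path(grid, start, goal):
    """BFS over a 4-connected grid; returns the cell path or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in prev:
                prev[nxt] = cur
                q.append(nxt)
    return None

grid = [[0] * 4 for _ in range(3)]
before = shortest_path(grid, (0, 0), (0, 3))
grid[0][1] = 1   # the user sketches an obstacle across the old route
grid[1][1] = 1
after = shortest_path(grid, (0, 0), (0, 3))   # recomputed detour
</n```

The detour path is longer than the original and avoids the sketched cells, mirroring the real-time recalculation described in the abstract.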
  • GPU Simulation of Finite Element Facial Soft-Tissue Models
    (The Eurographics Association, 2013) Warburton, Mark; Maddock, Steve; Silvester Czanner and Wen Tang
    Physically-based animation techniques enable more realistic and accurate animation to be created. We present a GPU-based finite element (FE) simulation and interactive visualisation system for efficiently producing realistic-looking animations of facial movement, including expressive wrinkles. It is optimised for simulating multi-layered voxel-based models using the total Lagrangian explicit dynamic (TLED) FE method. The flexibility of our system enables detailed animations of gross and fine-scale soft-tissue movement to be easily produced with different muscle structures and material parameters. While we focus on the forehead, the system can be used to animate any multi-material soft body.
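The TLED method mentioned above advances the deformation explicitly with central-difference time stepping. The idea can be shown on a deliberately tiny stand-in, a single 1D mass on a linear spring, rather than the paper's GPU finite-element solver; all parameter values are invented:

```python
# Illustrative only: explicit central-difference time stepping, the
# integration scheme underlying TLED, applied to a 1D mass-spring system.

def step(u, u_prev, mass, stiffness, dt):
    """One central-difference step: u_next from the two previous states."""
    force = -stiffness * u        # internal elastic force
    accel = force / mass
    return 2.0 * u - u_prev + dt * dt * accel

u_prev, u = 1.0, 1.0              # released from rest at u = 1
for _ in range(1000):
    u, u_prev = step(u, u_prev, mass=1.0, stiffness=1.0, dt=0.01), u
```

The displacement tracks the analytic solution cos(t) closely; explicit schemes like this are cheap per step and map well to GPUs, which is why TLED suits interactive soft-tissue simulation.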
  • Breathing Life into Statues Using Augmented Reality
    (The Eurographics Association, 2020) Ioannou, Eleftherios; Maddock, Steve; Ritsos, Panagiotis D. and Xu, Kai
    AR art is a relatively recent phenomenon, one that brings innovation in the way that artworks can be produced and presented in real-world locations and environments. We present an AR art app, running in real time on a smartphone, that can be used to bring to life inanimate objects such as statues. The work relies on a virtual copy of the real object, which is produced using photogrammetry, as well as a skeleton rig for subsequent animation. As part of the work, we present a new diminishing reality technique, based on the use of particle systems, to make the real object 'disappear' and be replaced by the animating virtual copy, effectively animating the inanimate. The approach is demonstrated on two objects: a juice carton and a small giraffe sculpture.
  • PED: Pedestrian Environment Designer
    (The Eurographics Association, 2016) McIlveen, James; Maddock, Steve; Heywood, Peter; Richmond, Paul; Cagatay Turkay and Tao Ruan Wan
    Pedestrian simulations have many uses, from pedestrian planning for architecture design through to games and entertainment. However, it is still challenging to efficiently author such simulations, especially for non-technical users. Direct pedestrian control is usually laborious, and, while indirect, environment-level control is often faster, it currently lacks the necessary tools to create complex environments easily and without extensive prior technical knowledge. This paper describes an indirect, environment-level control system in which pedestrians' behaviour can be specified efficiently and then interactively tuned. With the Pedestrian Environment Designer (PED) interface, authors can define environments using tools similar to those found in raster graphics editing software such as Photoshop™. Users paint on two-dimensional bitmap layers to control the behaviour of pedestrians in a three-dimensional simulation. The layers are then compiled to produce a live, agent-based pedestrian simulation using the FLAME GPU framework. Entrances and exits can be inserted, collision boundaries defined, and areas of attraction and avoidance added. The system also offers dynamic simulation updates at runtime, giving immediate feedback and enabling authors to simulate scenarios with dynamic elements such as barriers, or dynamic circumstances such as temporary areas of avoidance. As a result, authors are able to create complex crowd simulations more effectively and with minimal training.
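The layer-painting idea above, where 2D bitmap layers are compiled into simulation behaviour, can be illustrated with a minimal sketch. The function name, layer semantics, and weight are invented for the example; the real system compiles its layers into a FLAME GPU agent simulation:

```python
# Illustrative only: a painted "avoidance" bitmap layer is compiled with a
# base layer into per-cell traversal costs for the simulation.

def compile_layers(base_cost, avoidance, avoidance_weight=10.0):
    """Combine painted layers into one cost map (row-major lists)."""
    return [
        [base_cost[r][c] + avoidance_weight * avoidance[r][c]
         for c in range(len(base_cost[0]))]
        for r in range(len(base_cost))
    ]

base = [[1.0, 1.0], [1.0, 1.0]]
paint = [[0.0, 1.0], [0.0, 0.0]]   # the author paints one avoidance cell
cost = compile_layers(base, paint)
```

Because the layers stay editable, repainting and recompiling gives the runtime feedback loop the abstract describes.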
  • Evaluation of A Viseme-Driven Talking Head
    (The Eurographics Association, 2010) Dey, Priya; Maddock, Steve; Nicolson, Rod; John Collomosse and Ian Grimstead
    This paper introduces a three-dimensional virtual head for use in speech tutoring applications. The system achieves audiovisual speech synthesis using viseme-driven animation and a coarticulation model, to automatically generate speech from text. The talking head was evaluated using a modified rhyme test for intelligibility. The audiovisual speech animation was found to give higher intelligibility of isolated words than acoustic speech alone.
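The coarticulation model mentioned above blends neighbouring visemes rather than snapping between them. A minimal sketch of that idea is shown below, using a simple triangular dominance function; the viseme names, peak frames, and weighting scheme are all invented, and the paper's actual coarticulation model may differ:

```python
# Illustrative only: each viseme target influences nearby frames with a
# triangular dominance weight; overlapping weights are normalised, so
# transitions blend smoothly (coarticulation) instead of switching abruptly.

def coarticulated_weights(targets, frame, width=2.0):
    """Normalised weight of each viseme target at the given frame."""
    weights = {}
    for name, peak in targets:
        w = max(0.0, 1.0 - abs(frame - peak) / width)
        if w > 0.0:
            weights[name] = w
    total = sum(weights.values()) or 1.0
    return {k: v / total for k, v in weights.items()}

targets = [("lips_closed", 0), ("open", 2), ("tongue_up", 4)]
mid = coarticulated_weights(targets, frame=1)   # between two visemes
```

At a frame midway between two targets, both contribute equally, which is the blending behaviour that makes viseme-driven speech look continuous.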
  • Using Semi-automatic 3D Scene Reconstruction to Create a Digital Medieval Charnel Chapel
    (The Eurographics Association, 2016) Shui, Wuyang; Maddock, Steve; Heywood, Peter; Craig-Atkins, Elizabeth; Crangle, Jennifer; Hadley, Dawn; Scott, Rab; Cagatay Turkay and Tao Ruan Wan
    The use of a terrestrial laser scanner (TLS) has become a popular technique for the acquisition of 3D scenes in the fields of cultural heritage and archaeology. In this study, a semi-automatic reconstruction technique is presented to convert the point clouds that are produced, which often contain noise or are missing data, into a set of triangle meshes. The technique is applied to the reconstruction of a medieval charnel chapel. To reduce the computational complexity of reconstruction, the point cloud is first segmented into several components guided by the geometric structure of the scene. Landmarks are interactively marked on the point cloud and multiple cutting planes are created using the least squares method. Then, sampled point clouds for each component are meshed by ball-pivoting. In order to fill the large missing regions on the walls and ground plane, inserted triangle meshes are calculated on the basis of the convex hull of the projection points on the bounding plane. The iterative closest point (ICP) approach and local non-rigid registration methods are used to make the inserted triangle meshes tightly match the original model. Using these methods, we have reconstructed a digital model of the medieval charnel chapel, which not only serves to preserve a digital record of it, but also enables members of the public to experience the space virtually.
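The ICP registration step mentioned above alternates between finding nearest-point correspondences and updating the alignment. A deliberately reduced 2D, translation-only version is sketched below; real ICP also estimates a rotation, and the point sets here are invented:

```python
# Illustrative only: toy 2D translation-only ICP. Each iteration matches
# every source point to its nearest target point (brute force), then
# shifts the source by the mean correspondence offset.

def icp_translation(source, target, iterations=10):
    src = [list(p) for p in source]
    for _ in range(iterations):
        pairs = []
        for p in src:
            q = min(target,
                    key=lambda t: (t[0] - p[0])**2 + (t[1] - p[1])**2)
            pairs.append((p, q))
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        for p in src:
            p[0] += dx
            p[1] += dy
    return src

target = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
source = [(0.3, 0.2), (1.3, 0.2), (0.3, 1.2)]  # target shifted by (0.3, 0.2)
aligned = icp_translation(source, target)
```

Because the initial correspondences are already correct here, the alignment converges in one iteration; on real scan data the match/update loop must repeat as correspondences improve.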
  • Audio-Visual Animation of Urban Space
    (The Eurographics Association, 2010) Richmond, Paul; Smyrnova, Yuliya; Maddock, Steve; Kang, Jian; John Collomosse and Ian Grimstead
    We present a technique for simulating accurate physically modelled acoustics within an outdoor urban environment and a tool that presents the acoustics alongside a visually rendered counterpart. Acoustic modelling is achieved by using a mixture of simulating ray-traced specular sound wave reflections and applying radiosity to simulate diffuse reflections. Sound rendering is applied to the energy response of the acoustic modelling stage and is used to produce a number of binaural samples for playback with headphones. The visual tool unites the acoustic renderings with an accurate 3D representation of the virtual environment. As part of this tool, an interpolation technique has been implemented allowing a user-controlled walkthrough of the simulated environment. This produces better sound localisation effects than listening from a set number of static locations.
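The walkthrough interpolation mentioned above blends precomputed responses from nearby listener positions rather than jumping between static ones. A minimal 1D sketch of that blending is shown below; the function name, the response values, and the linear weighting are all invented for the example:

```python
# Illustrative only: the response at an arbitrary listener position along a
# walkthrough path is a distance-weighted blend of the two nearest
# precomputed responses.

def interpolate_response(pos, pos_a, resp_a, pos_b, resp_b):
    """Linear blend of two sampled responses along a 1D path."""
    t = (pos - pos_a) / (pos_b - pos_a)
    return [(1 - t) * a + t * b for a, b in zip(resp_a, resp_b)]

# Precomputed (invented) energy responses at positions 2.0 and 3.0:
resp = interpolate_response(2.5, 2.0, [1.0, 0.5], 3.0, [0.0, 0.9])
```

Halfway between the two sample positions the blend weights are equal, so the perceived sound changes smoothly as the listener walks.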
  • Physically-based Sticky Lips
    (The Eurographics Association, 2018) Leach, Matthew; Maddock, Steve; Tam, Gary K. L. and Vidal, Franck
    In this paper, a novel solution is provided for the sticky lip problem in computer facial animation, recreating the way the lips stick together when drawn apart in speech or in the formation of facial expressions. Traditional approaches to modelling this rely on an artist estimating the correct behaviour. In contrast, this paper presents a physically-based model. The mouth is modelled using the total Lagrangian explicit dynamics finite element method, with a new breaking element modelling the saliva between the lips. With this approach, subtle yet complex behaviours are recreated implicitly, giving rise to more realistic movements of the lips. The model is capable of reproducing varying degrees of stickiness between the lips, as well as asymmetric effects.
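The breaking element described above resists separation of the lips until a threshold is exceeded, then stops contributing force. A toy 1D version of that behaviour is sketched below; the class name, linear force law, and parameter values are invented, and the paper's element lives inside a finite-element model rather than a single spring:

```python
# Illustrative only: a spring joining the lips resists separation until
# its strain exceeds a threshold, then breaks permanently (no more force),
# mimicking the saliva "breaking element" idea.

class BreakingElement:
    def __init__(self, rest_length, stiffness, break_strain):
        self.rest_length = rest_length
        self.stiffness = stiffness
        self.break_strain = break_strain
        self.broken = False

    def force(self, length):
        """Restoring force pulling the lips back together, or 0 if broken."""
        strain = (length - self.rest_length) / self.rest_length
        if self.broken or strain > self.break_strain:
            self.broken = True
            return 0.0
        return -self.stiffness * (length - self.rest_length)

saliva = BreakingElement(rest_length=1.0, stiffness=5.0, break_strain=0.5)
f1 = saliva.force(1.2)   # still sticking: restoring force
f2 = saliva.force(1.8)   # strain 0.8 > 0.5: the element breaks
f3 = saliva.force(1.2)   # stays broken afterwards
```

Varying the stiffness and break strain per element is one way such a model could produce the varying degrees of stickiness and asymmetric effects the abstract mentions.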
  • A Statistically-Assisted Sketch-Based Interface for Creating Arbitrary 3-dimensional Faces
    (The Eurographics Association, 2007) Gunnarsson, Orn; Maddock, Steve; Ik Soo Lim and David Duce
    Creating faces is important in a number of application areas. Faces can be constructed using commercial modelling tools, existing faces can be transferred to a digital form using equipment such as laser scanners, and law enforcement agencies use sketch artists and photo-fit software to produce faces of suspects. We present a technique that can create a 3-dimensional head using intuitive, artistic 2-dimensional sketching techniques. Our work involves bringing together two types of graphics applications: sketching interfaces and systems used to create 3-dimensional faces, through the mediation of a statistical model. We present our results where we sketch a nose and search for a geometric face model in a database whose nose best matches the sketched nose.
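The retrieval step described above, searching a database for the face whose nose best matches the sketch, can be illustrated as a nearest-neighbour search over small feature descriptors. The descriptors, their components, and the database entries below are invented; the paper mediates this matching through a statistical model rather than raw Euclidean distance:

```python
# Illustrative only: the sketched nose is reduced to a feature vector and
# matched against a database of face models by squared Euclidean distance.

def best_match(sketch, database):
    """Return the database key whose descriptor is closest to the sketch."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda name: dist(database[name], sketch))

database = {
    "face_01": [0.30, 0.70],   # invented: nose width, bridge height
    "face_02": [0.55, 0.40],
}
match = best_match([0.50, 0.45], database)
```

The sketched descriptor lies closest to the second entry, so that face model would be retrieved.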