Animation Reconstruction of Deformable Surfaces
<p> Accurate and reliable 3D digitization of dynamic shapes is a critical component in the creation of compelling CG animations. Digitizing deformable surfaces has applications ranging from robotics, biomedicine, and education to interactive games and film production. Markerless 3D acquisition technologies, in the form of continuous high-resolution scan sequences, are becoming increasingly widespread and capture not only static shapes but entire performances. However, due to the lack of inter-frame correspondences, the potential gains offered by these systems (such as recovery of fine-scale dynamics) have yet to be tapped. The primary purpose of this dissertation is to investigate foundational algorithms and frameworks that reliably compute these correspondences in order to obtain a complete digital representation of deforming surfaces from acquired data. We further our explorations in an important subfield of computer graphics, the realistic animation of human faces, and develop a full system for real-time markerless facial tracking and expression transfer to arbitrary characters. To this end, we complement our framework with a new automatic rigging tool that offers an intuitive way to instrument captured facial animations.</p>
<p>We begin our investigation by addressing the fundamental problem of non-rigid registration, which establishes correspondences between incomplete scans of deforming surfaces. A robust algorithm is presented that tightly couples correspondence estimation and surface deformation within a single global optimization. With this approach, we break the dependency between the two computations and achieve warps with considerably higher global spatial consistency than existing methods.
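The interplay between correspondence estimation, a local-rigidity-maximizing deformation model, and gradually reduced stiffness can be sketched in miniature. The toy Python example below (the function name, the 2D point-chain setup, and the alternating closest-point scheme are illustrative assumptions on my part, not the dissertation's single global optimization) registers a chain of points to a displaced copy while relaxing a stiffness weight `alpha` between passes.

```python
import numpy as np

def register_nonrigid(src, tgt, alphas=(100.0, 10.0, 1.0, 0.1)):
    """Toy non-rigid registration of a 2D point chain.

    Alternates a closest-point correspondence step with a stiffness-
    regularized deformation step, lowering the stiffness `alpha` between
    passes. Illustrative only: the dissertation's method couples both
    steps inside one global optimization instead of alternating.
    """
    n = len(src)
    x = src.copy()
    # edge-difference operator for the chain topology
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    rest = D @ src                                   # rest-state edge vectors
    for alpha in alphas:
        # correspondence: nearest target point for every current point
        dist = np.linalg.norm(x[:, None] - tgt[None, :], axis=2)
        corr = tgt[dist.argmin(axis=1)]
        # deformation: fit the correspondences while keeping edge vectors
        # close to the rest shape, weighted by the current stiffness
        A = np.vstack([np.eye(n), np.sqrt(alpha) * D])
        b = np.vstack([corr, np.sqrt(alpha) * rest])
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With a high initial stiffness the chain moves almost rigidly toward the data; as `alpha` drops, it can bend to absorb the residual non-rigid motion.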
We further corroborate the decisive aspects of using a non-linear, space-time adaptive deformation model that maximizes local rigidity, and an optimization procedure that systematically reduces stiffness.</p>
<p>While recent advances in acquisition technology have made high-quality real-time 3D capture possible, surface regions occluded from the sensors cannot be captured. In this respect, we propose two distinct avenues for dynamic shape reconstruction. Our first approach consists of a bi-resolution framework that employs a smooth template model as a geometric and topological prior. While large-scale motions are recovered using non-rigid registration, fine-scale details are synthesized using a linear mesh deformation algorithm. We show how a detail aggregation and filtering procedure allows the transfer of persistent geometric details to regions that are not visible to the scanner. The second framework considers temporally coherent shape completion as the primary target and avoids the requirement of establishing a consistent parameterization through time. The main benefit is that the method does not require a template model and is not susceptible to error accumulation, because the correspondence estimations are localized within a time window.</p>
<p>The second part of this dissertation focuses on the animation reconstruction of realistic human faces. We present a complete integrated system for live facial puppetry that enables compelling facial expression tracking with transfer to another person's face. Even with just a single rigid pose of the target face, convincing facial animations are achievable and easy to control by an actor. We accomplish real-time performance through dimensionality reduction and by carefully shifting the complexity of online computation toward offline pre-processing. To facilitate the manipulation of reconstructed facial animations, we introduce a method for generating facial blendshape rigs from a set of example poses of a CG character.
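The dimensionality-reduction step behind such expression transfer can be illustrated with a standard blendshape fit: represent a tracked frame by a small weight vector over expression offsets, then reuse those weights on the target character's blendshapes. This is only a minimal sketch under generic blendshape assumptions; the function names, array shapes, and plain least-squares solve are mine, not the actual system's formulation (which additionally shifts heavy computation into offline pre-processing).

```python
import numpy as np

def fit_blendshape_weights(frame, neutral, deltas):
    """Solve min_w || neutral + sum_k w[k] * deltas[k] - frame ||^2.

    deltas has shape (K, V, 3): per-expression vertex offsets from the
    neutral face. The small weight vector w is the low-dimensional
    representation of the captured frame.
    """
    D = deltas.reshape(len(deltas), -1).T          # (3V, K) basis matrix
    w, *_ = np.linalg.lstsq(D, (frame - neutral).ravel(), rcond=None)
    return w

def transfer_expression(w, tgt_neutral, tgt_deltas):
    """Reuse the source weights on the target character's blendshapes."""
    return tgt_neutral + np.tensordot(w, tgt_deltas, axes=1)
```

Because only K weights are solved per frame rather than 3V vertex coordinates, the online cost stays small enough for real-time tracking.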
The algorithm transfers controller semantics from a generic rig to the target blendshape model while solving for an optimal reproduction of the training poses. We show the advantages of phrasing the optimization in gradient space and demonstrate the performance of the system in the context of art-directable facial tracking.</p>
<p>The performance of our methods is evaluated using two state-of-the-art real-time acquisition systems, based on structured light and multi-view photometric stereo, respectively.</p>
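Phrasing an optimization in gradient space means solving for positions whose differential (edge) vectors match prescribed gradients, a Poisson-style least-squares problem. The sketch below shows the idea on a 1D vertex chain; the helper name, the soft anchor constraint, and the dense solve are illustrative assumptions (a real rig would use a sparse solver over per-triangle deformation gradients, not this toy setup).

```python
import numpy as np

def solve_gradient_space(edges, grads, anchor_idx, anchor_val, n):
    """Least-squares recovery of n vertex values whose edge differences
    match the prescribed gradients; one heavily weighted soft anchor
    removes the translational null space (a tiny Poisson-style solve)."""
    rows, rhs = [], []
    for (i, j), g in zip(edges, grads):
        r = np.zeros(n)
        r[j], r[i] = 1.0, -1.0          # (x_j - x_i) should equal g
        rows.append(r)
        rhs.append(g)
    anchor = np.zeros(n)
    anchor[anchor_idx] = 1.0            # pin one vertex to fix translation
    rows.append(100.0 * anchor)
    rhs.append(100.0 * anchor_val)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x
```

Working in this differential domain distributes fitting error smoothly over the surface instead of concentrating it at individual vertices, which is one reason gradient-space formulations behave well for pose reproduction.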