Show simple item record

dc.contributor.author  Laine, Samuli  en_US
dc.contributor.author  Karras, Tero  en_US
dc.contributor.author  Aila, Timo  en_US
dc.contributor.author  Herva, Antti  en_US
dc.contributor.author  Saito, Shunsuke  en_US
dc.contributor.author  Yu, Ronald  en_US
dc.contributor.author  Li, Hao  en_US
dc.contributor.author  Lehtinen, Jaakko  en_US
dc.contributor.editor  Bernhard Thomaszewski and KangKang Yin and Rahul Narain  en_US
dc.date.accessioned  2017-12-31T10:45:06Z
dc.date.available  2017-12-31T10:45:06Z
dc.date.issued  2017
dc.identifier.isbn  978-1-4503-5091-4
dc.identifier.issn  1727-5288
dc.identifier.uri  http://dx.doi.org/10.1145/3099564.3099581
dc.identifier.uri  https://diglib.eg.org:443/handle/10.1145/3099564-3099581
dc.description.abstract  We present a real-time deep learning framework for video-based facial performance capture, i.e., the dense 3D tracking of an actor's face given a monocular video. Our pipeline begins with accurately capturing a subject using a high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations. With 5-10 minutes of captured footage, we train a convolutional neural network to produce high-quality output, including self-occluded regions, from a monocular video sequence of that subject. Since this 3D facial performance capture is fully automated, our system can drastically reduce the amount of labor involved in the development of modern narrative-driven video games or films involving realistic digital doubles of actors and potentially hours of animated dialogue per character. We compare our results with several state-of-the-art monocular real-time facial capture techniques and demonstrate compelling animation inference in challenging areas such as eyes and lips.  en_US
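The abstract describes training a convolutional network to regress dense 3D face geometry directly from monocular video frames. As an illustrative sketch only (not the paper's architecture; all layer sizes, kernel shapes, and vertex counts here are hypothetical), the basic frame-to-vertices regression can be pictured as:

```python
import numpy as np

def conv2d(x, k):
    """Naive valid 2D convolution (cross-correlation) of image x with kernel k."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def predict_vertices(frame, kernel, weights, n_vertices):
    """Toy regressor: conv -> ReLU -> global average pool -> linear head
    producing one 3D position per mesh vertex."""
    feat = np.maximum(conv2d(frame, kernel), 0.0)      # conv + ReLU
    pooled = feat.mean()                               # global average pool
    return (pooled * weights).reshape(n_vertices, 3)   # linear readout to 3D

rng = np.random.default_rng(0)
frame = rng.random((32, 32))            # stand-in for a cropped face frame
kernel = rng.standard_normal((3, 3))
n_vertices = 10                         # production rigs use thousands of vertices
weights = rng.standard_normal(n_vertices * 3)

verts = predict_vertices(frame, kernel, weights, n_vertices)
print(verts.shape)  # (10, 3)
```

In the described system, a much deeper network is trained (supervised regression) on 5-10 minutes of multi-view-stereo-tracked footage of one subject, so it can infer the full vertex set, including self-occluded regions, from a single camera at runtime.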
dc.publisher  ACM  en_US
dc.subject  Computing methodologies
dc.subject  Animation
dc.subject  Neural networks
dc.subject  Supervised learning by regression
dc.subject  Facial animation
dc.subject  deep learning
dc.title  Production-Level Facial Performance Capture Using Deep Convolutional Neural Networks  en_US
dc.description.seriesinformation  Eurographics/ACM SIGGRAPH Symposium on Computer Animation
dc.description.sectionheaders  Papers III: Kinematic Characters
dc.identifier.doi  10.1145/3099564.3099581

