Monocular Facial Performance Capture Via Deep Expression Matching

dc.contributor.author: Bailey, Stephen W. [en_US]
dc.contributor.author: Riviere, Jérémy [en_US]
dc.contributor.author: Mikkelsen, Morten [en_US]
dc.contributor.author: O'Brien, James F. [en_US]
dc.contributor.editor: Dominik L. Michels [en_US]
dc.contributor.editor: Soeren Pirk [en_US]
dc.date.accessioned: 2022-08-10T15:19:53Z
dc.date.available: 2022-08-10T15:19:53Z
dc.date.issued: 2022
dc.description.abstract: Facial performance capture is the process of automatically animating a digital face according to a captured performance of an actor. Recent developments in this area have focused on high-quality results using expensive head-scanning equipment and camera rigs. These methods produce impressive animations that accurately capture subtle details in an actor's performance. However, these methods are accessible only to content creators with relatively large budgets. Current methods using inexpensive recording equipment generally produce lower quality output that is unsuitable for many applications. In this paper, we present a facial performance capture method that does not require facial scans and instead animates an artist-created model using standard blendshapes. Furthermore, our method gives artists high-level control over animations through a workflow similar to existing commercial solutions. Given a recording, our approach matches keyframes of the video with corresponding expressions from an animated library of poses. A Gaussian process model then computes the full animation by interpolating from the set of matched keyframes. Our expression-matching method computes a low-dimensional latent code from an image that represents a facial expression while factoring out the facial identity. Images depicting similar facial expressions are identified by their proximity in the latent space. In our results, we demonstrate the fidelity of our expression-matching method. We also compare animations generated with our approach to animations generated with commercially available software. [en_US]
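The abstract describes two technical steps: matching video keyframes to library poses by proximity in a learned latent expression space, and interpolating the full blendshape animation from the matched keyframes with a Gaussian process. A minimal sketch of both steps is below; it assumes latent codes and per-keyframe blendshape weights are already available, and the function names, RBF kernel, and hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def match_expression(query_code, library_codes):
    """Return the index of the library pose nearest to the query
    in the latent expression space (L2 proximity, per the abstract)."""
    dists = np.linalg.norm(library_codes - query_code, axis=1)
    return int(np.argmin(dists))

def gp_interpolate(key_times, key_weights, query_times,
                   length_scale=0.5, noise=1e-4):
    """Gaussian-process regression (illustrative RBF kernel) from a sparse
    set of keyframe blendshape weights to a dense animation curve.
    key_times: (k,), key_weights: (k, n_blendshapes), query_times: (q,)."""
    def rbf(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)
    K = rbf(key_times, key_times) + noise * np.eye(len(key_times))
    K_star = rbf(query_times, key_times)
    # Posterior mean of the GP conditioned on the keyframes.
    return K_star @ np.linalg.solve(K, key_weights)
```

As a usage example, `gp_interpolate(np.array([0., 1., 2.]), weights_at_keys, np.linspace(0, 2, 60))` would densify three matched keyframes into 60 frames; with small `noise`, the curve passes (nearly) through the keyframe weights.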
dc.description.number: 8
dc.description.sectionheaders: Capture, Tracking, and Facial Animation
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 41
dc.identifier.doi: 10.1111/cgf.14639
dc.identifier.issn: 1467-8659
dc.identifier.pages: 243-254
dc.identifier.pages: 12 pages
dc.identifier.uri: https://doi.org/10.1111/cgf.14639
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf14639
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd. [en_US]
dc.subject: CCS Concepts: Computing methodologies --> Animation; Neural networks
dc.subject: Computing methodologies
dc.subject: Animation
dc.subject: Neural networks
dc.title: Monocular Facial Performance Capture Via Deep Expression Matching [en_US]
Files (original bundle, 2 items):
- v41i8pp243-254.pdf (25.55 MB, Adobe Portable Document Format)
- monocular_facial_performace_capture_via_deep_expression_matching.mp4 (106.97 MB, unknown data format)