A Region-Based Facial Motion Analysis and Retargeting Model for 3D Characters

dc.contributor.author: Zhu, ChangAn
dc.contributor.author: Soltanpour, Sima
dc.contributor.author: Joslin, Chris
dc.contributor.editor: Christie, Marc
dc.contributor.editor: Han, Ping-Hsuan
dc.contributor.editor: Lin, Shih-Syun
dc.contributor.editor: Pietroni, Nico
dc.contributor.editor: Schneider, Teseo
dc.contributor.editor: Tsai, Hsin-Ruey
dc.contributor.editor: Wang, Yu-Shuen
dc.contributor.editor: Zhang, Eugene
dc.date.accessioned: 2025-10-07T06:02:22Z
dc.date.available: 2025-10-07T06:02:22Z
dc.date.issued: 2025
dc.description.abstract: As the application scenarios of 3D facial animation expand, extensive research has been conducted on facial motion capture, 3D face parameterization, and retargeting. However, current retargeting methods still struggle to reflect the source motion accurately on a target 3D face. One major reason is that the source motion is not translated into precise representations of its meaning and intensity, so the target 3D face presents inaccurate motion semantics. We propose a region-based facial motion analysis and retargeting model that predicts detailed facial motion representations and produces a plausible retargeting result from 3D facial landmark input. We define the regions based on facial muscle behaviours and train a motion-to-representation regression for each region. We also introduce a refinement process, built from an autoencoder and a facial-landmark motion predictor, that works for the face rigs of both real-life subjects and fictional characters and improves the precision of the retargeting. The region-based strategy effectively balances the motion scales of the different facial regions, providing reliable representation prediction and retargeting results. Representation prediction and refinement from 3D facial landmark input enable flexible application scenarios such as video-based and marker-based motion retargeting, as well as the reuse of animation assets for Computer-Generated (CG) characters. Our evaluation shows that the proposed model produces semantically more accurate and visually more natural results than similar methods and the commercial solution from Faceware, and our ablation study demonstrates the positive effects of the region-based strategy and the refinement process.
dc.description.sectionheaders: Character Animation
dc.description.seriesinformation: Pacific Graphics Conference Papers, Posters, and Demos
dc.identifier.doi: 10.2312/pg.20251257
dc.identifier.isbn: 978-3-03868-295-0
dc.identifier.pages: 12 pages
dc.identifier.uri: https://doi.org/10.2312/pg.20251257
dc.identifier.uri: https://diglib.eg.org/handle/10.2312/pg20251257
dc.publisher: The Eurographics Association
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Computing methodologies → Animation; Motion processing; Motion capture
dc.title: A Region-Based Facial Motion Analysis and Retargeting Model for 3D Characters
Files (Original bundle, showing 3 of 3):
- pg20251257.pdf (3.69 MB, Adobe Portable Document Format)
- paper1325_mm1.mp4 (304.91 MB, Video MP4)
- paper1325_mm2.pdf (60.33 KB, Adobe Portable Document Format)