A Region-Based Facial Motion Analysis and Retargeting Model for 3D Characters
| dc.contributor.author | Zhu, ChangAn | en_US |
| dc.contributor.author | Soltanpour, Sima | en_US |
| dc.contributor.author | Joslin, Chris | en_US |
| dc.contributor.editor | Christie, Marc | en_US |
| dc.contributor.editor | Han, Ping-Hsuan | en_US |
| dc.contributor.editor | Lin, Shih-Syun | en_US |
| dc.contributor.editor | Pietroni, Nico | en_US |
| dc.contributor.editor | Schneider, Teseo | en_US |
| dc.contributor.editor | Tsai, Hsin-Ruey | en_US |
| dc.contributor.editor | Wang, Yu-Shuen | en_US |
| dc.contributor.editor | Zhang, Eugene | en_US |
| dc.date.accessioned | 2025-10-07T06:02:22Z | |
| dc.date.available | 2025-10-07T06:02:22Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | With the expanding range of application scenarios for 3D facial animation, extensive research has been conducted on facial motion capture, 3D face parameterization, and retargeting. However, current retargeting methods still struggle to accurately reflect the source motion on a target 3D face. One major reason is that the source motion is not translated into precise representations of the motion's meaning and intensity, so the target 3D face presents inaccurate motion semantics. We propose a region-based facial motion analysis and retargeting model that predicts detailed facial motion representations and produces plausible retargeting results from 3D facial landmark input. We define the regions based on facial muscle behaviours and train a motion-to-representation regression for each region. We also introduce a refinement process, built on an autoencoder and a motion predictor for facial landmarks, that works with the face rigs of both real-life subjects and fictional characters and improves the precision of the retargeting. The region-based strategy effectively balances the motion scales of the different facial regions, providing reliable representation prediction and retargeting results. Representation prediction and refinement from 3D facial landmark input enable flexible application scenarios such as video-based and marker-based motion retargeting, as well as the reuse of animation assets for Computer-Generated (CG) characters. Our evaluation shows that the proposed model produces semantically more accurate and visually more natural results than similar methods and the commercial solution from Faceware. Our ablation study demonstrates the positive effects of the region-based strategy and the refinement process. | en_US |
| dc.description.sectionheaders | Character Animation | |
| dc.description.seriesinformation | Pacific Graphics Conference Papers, Posters, and Demos | |
| dc.identifier.doi | 10.2312/pg.20251257 | |
| dc.identifier.isbn | 978-3-03868-295-0 | |
| dc.identifier.pages | 12 pages | |
| dc.identifier.uri | https://doi.org/10.2312/pg.20251257 | |
| dc.identifier.uri | https://diglib.eg.org/handle/10.2312/pg20251257 | |
| dc.publisher | The Eurographics Association | en_US |
| dc.rights | Attribution 4.0 International License | |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
| dc.subject | CCS Concepts: Computing methodologies → Animation; Motion processing; Motion capture | |
| dc.subject | Computing methodologies → Animation | |
| dc.subject | Motion processing | |
| dc.subject | Motion capture | |
| dc.title | A Region-Based Facial Motion Analysis and Retargeting Model for 3D Characters | en_US |