B2F: End-to-End Body-to-Face Motion Generation with Style Reference
Date
2025
Authors
Jang, Bokyung; Jung, Eunho; Lee, Yoonsang
Publisher
The Eurographics Association
Abstract
Human motion naturally integrates body movements and facial expressions, forming a unified perception. If a virtual character's facial expression does not align well with its body movements, it may weaken the perception of the character as a cohesive whole. Motivated by this, we propose B2F, a model that generates facial motions aligned with body movements. B2F takes a facial style reference as input, generating facial animations that reflect the provided style while maintaining consistency with the associated body motion. To achieve this, B2F learns a disentangled representation of content and style, using alignment and consistency-based objectives. We represent style using discrete latent codes learned via the Gumbel-Softmax trick, enabling diverse expression generation with a structured latent representation. B2F outputs facial motion in the FLAME format, making it compatible with SMPL-X characters, and supports ARKit-style avatars through a dedicated conversion module. Our evaluations show that B2F generates expressive and engaging facial animations that synchronize with body movements and style intent, while mitigating perceptual dissonance from mismatched cues, and generalizing across diverse characters and styles.
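The abstract notes that style is represented with discrete latent codes learned via the Gumbel-Softmax trick. As a rough illustration of that relaxation (not the paper's actual implementation; the function name, codebook size, and temperature below are illustrative), a categorical choice over style codes can be made differentiable by adding Gumbel noise to the logits and applying a temperature-scaled softmax:

```python
import numpy as np

def gumbel_softmax_sample(logits, temperature=0.5, rng=None):
    """Draw a relaxed (soft one-hot) sample from categorical logits
    using the Gumbel-Softmax trick."""
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise: -log(-log(U)), U ~ Uniform(0, 1)
    u = rng.uniform(1e-10, 1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))
    y = (logits + gumbel) / temperature
    y = y - y.max()                 # shift for numerical stability
    expy = np.exp(y)
    return expy / expy.sum()        # soft one-hot over codebook entries

# Hypothetical scores over 3 style codes in a learned codebook
logits = np.array([2.0, 0.5, -1.0])
sample = gumbel_softmax_sample(logits, temperature=0.5)
```

Lower temperatures push the sample toward a hard one-hot selection (a single discrete style code), while higher temperatures keep the relaxation smooth enough for gradients to flow during training.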
Citation
@inproceedings{10.2312:pg.20251256,
  booktitle = {Pacific Graphics Conference Papers, Posters, and Demos},
  editor    = {Christie, Marc and Han, Ping-Hsuan and Lin, Shih-Syun and Pietroni, Nico and Schneider, Teseo and Tsai, Hsin-Ruey and Wang, Yu-Shuen and Zhang, Eugene},
  title     = {{B2F: End-to-End Body-to-Face Motion Generation with Style Reference}},
  author    = {Jang, Bokyung and Jung, Eunho and Lee, Yoonsang},
  year      = {2025},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-295-0},
  DOI       = {10.2312/pg.20251256}
}
