Mapping of Facial Action Units to Virtual Avatar Blend Shape Movement

Authors/Editors: Wolff, Tony; Dollack, Felix; Perusquia-Hernandez, Monica; Uchiyama, Hideaki; Kiyokawa, Kiyoshi; Tanabe, Takeshi; Yem, Vibol
Date issued: 2024-11-29
Year: 2024
ISBN: 978-3-03868-246-2
ISSN: 1727-530X
DOI: https://doi.org/10.2312/egve.20241392
Handle: https://diglib.eg.org/handle/10.2312/egve20241392
Pages: 2
License: Attribution 4.0 International License (CC BY 4.0)
CCS Concepts: Human-centered computing → Virtual reality; Computing methodologies → Machine learning algorithms

Abstract: Action Units and blend shapes are two frameworks for describing facial movement. However, mappings between the two frameworks are underinvestigated. We present an automated mapping technique using machine learning. Our model infers ARKit-compatible blend shape weights from action unit intensities extracted with OpenFace. We use a GRU architecture to retain time-dependent information, leveraging the properties of recurrent neural networks while still permitting fast, real-time inference. Our generalized model yields an activation precision of 90% and an activation recall of 85%.
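The abstract describes a GRU that maps per-frame OpenFace action unit intensities to ARKit blend shape weights. The following is a minimal PyTorch sketch of that kind of model, not the authors' implementation; the input/output dimensions (17 OpenFace AU intensities, 52 ARKit blend shapes), the hidden size, and the class name AUToBlendShapeGRU are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AUToBlendShapeGRU(nn.Module):
    """Hypothetical sketch: AU intensity sequence -> blend shape weight sequence."""

    def __init__(self, num_aus=17, num_blend_shapes=52, hidden_size=64):
        super().__init__()
        # GRU keeps time-dependent information across frames
        self.gru = nn.GRU(input_size=num_aus, hidden_size=hidden_size, batch_first=True)
        # Linear head maps the hidden state to one weight per blend shape
        self.head = nn.Linear(hidden_size, num_blend_shapes)

    def forward(self, au_sequence, hidden=None):
        # au_sequence: (batch, time, num_aus) action unit intensities
        out, hidden = self.gru(au_sequence, hidden)
        # Blend shape weights are typically constrained to [0, 1]
        weights = torch.sigmoid(self.head(out))
        return weights, hidden


if __name__ == "__main__":
    model = AUToBlendShapeGRU()
    # One second of 30 fps AU intensities for a single face (random placeholder data)
    aus = torch.rand(1, 30, 17)
    blend_shapes, _ = model(aus)
    print(blend_shapes.shape)  # torch.Size([1, 30, 52])
```

Because the GRU carries its hidden state forward frame by frame, a deployment could feed one frame at a time and reuse the returned hidden state, which is what permits the fast, real-time inference mentioned in the abstract.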