Improving Facial Rig Semantics for Tracking and Retargeting

Authors: Omens, Dalton; Thurman, Allise; Yu, Jihun; Fedkiw, Ron
Editors: Masia, Belen; Thies, Justus
Session: Digital Humans: From Capture to Control
Date: 2026-04-17
ISSN: 1467-8659
URI: https://diglib.eg.org/handle/10.1111/cgf70417
DOI: https://doi.org/10.1111/cgf70417 (10.1111/cgf.70417)
License: CC-BY-4.0
Keywords: Animation
Pages: 12 pages

Abstract: In this paper, we consider retargeting a tracked facial performance to other people or virtual characters. We utilize the same rig framework for both tracking and animation to remove the difficulties associated with retargeting the semantics of one framework to another. Our carefully designed set of Simon-Says expressions and regularizers is used to calibrate each rig to the motion signatures of the relevant performer or target. Although a uniform set of Simon-Says expressions can likely be used for all person-to-person retargeting, we argue that person-to-virtual-character retargeting benefits from an expression set that captures the distinct motion signature of the virtual character rig. The Simon-Says calibrated rigs tend to produce the desired expressions when exercising animation controls. Unfortunately, these well-calibrated rigs still lead to undesirable controls when tracking a performance, even though they generally produce acceptable geometry reconstructions. Thus, we propose a fine-tuning approach that modifies the rig used by the tracker to promote the output of more semantically meaningful animation controls, facilitating high-efficacy retargeting. To better address real-world scenarios, the fine-tuning relies on implicit differentiation so that the tracker can be treated as a potentially non-differentiable black box. Experiments demonstrate the benefits of our calibration methods on high-fidelity expressive performance retargeting for different capture conditions, trackers, and rig frameworks.
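The fine-tuning step described in the abstract differentiates through the tracker implicitly: because the tracker's output controls satisfy the stationarity condition of its fitting energy, gradients with respect to rig parameters need only derivatives of that energy at the solver's output, never derivatives of the solver itself. Below is a minimal sketch of that idea, assuming a toy linear blendshape rig, a quadratic fitting energy, and a generic off-the-shelf optimizer standing in for the tracker; the basis `B`, the energy form, `w_sem`, and the step size are all hypothetical illustrations, not the paper's implementation.

```python
# Hypothetical sketch of implicit differentiation through a black-box tracker:
# fine-tune rig parameters theta so the tracker emits more meaningful controls.
# Toy linear blendshape rig + quadratic energy; NOT the paper's implementation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, k = 30, 5                        # 30 stacked vertex coordinates, 5 controls
B = rng.normal(size=(m, k))         # blendshape basis (hypothetical)
w_sem = np.array([1.0, 0.0, 0.0, 0.5, 0.0])  # semantically desired controls
d = B @ w_sem                       # tracked target geometry (hypothetical)
lam = 1e-2                          # weight of the control regularizer

def energy(w, theta):
    """Tracker fitting energy E(w; theta): geometry residual + regularizer."""
    r = B @ (theta * w) - d
    return 0.5 * r @ r + 0.5 * lam * w @ w

def track(theta):
    """The tracker, treated as a black box: we only use its minimizer w*."""
    return minimize(energy, np.zeros(k), args=(theta,), method="L-BFGS-B").x

def implicit_grad(theta, w, dL_dw):
    """dL/dtheta via the implicit function theorem at g(w, theta) = dE/dw = 0."""
    D = np.diag(theta)
    H = D @ B.T @ B @ D + lam * np.eye(k)          # dg/dw (Hessian of E in w)
    Btr = B.T @ (B @ (theta * w) - d)
    J = np.diag(Btr) + (D @ B.T @ B) * w[None, :]  # dg/dtheta, column by column
    return dL_dw @ (-np.linalg.solve(H, J))        # chain rule through w*(theta)

theta = np.ones(k)                                 # rig parameters to fine-tune
for _ in range(100):
    w = track(theta)                               # forward pass: black-box solve
    theta -= 0.1 * implicit_grad(theta, w, w - w_sem)  # L = 0.5*||w - w_sem||^2
```

Note that `track` is queried only for its output: the implicit function theorem gives dw*/dtheta = -(dg/dw)^{-1} (dg/dtheta) from the stationarity condition alone, which is what lets the abstract's fine-tuning treat a potentially non-differentiable tracker as a black box.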