Implicit Shape Avatar Generalization across Pose and Identity

Authors: Loranchet, Guillaume; Hellier, Pierre; Schnitzler, Francois; Boukhayma, Adnane; Regateiro, Joao; Multon, Franck
Editors: Ceylan, Duygu; Li, Tzu-Mao
Date: 2025-05-09
ISBN: 978-3-03868-268-4
ISSN: 1017-4656
DOI: https://doi.org/10.2312/egs.20251049
Handle: https://diglib.eg.org/handle/10.2312/egs20251049
Pages: 4
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies → Motion processing; Computing methodologies → Mesh models

Abstract: The creation of realistic animated avatars has become a hot topic in both academia and the creative industry. Recent advances in deep learning and implicit representations have opened new research avenues, particularly for enhancing avatar detail with lightweight models. This paper introduces an improvement over the state-of-the-art implicit Fast-SNARF method to permit generalization to novel motions and shape identities. Fast-SNARF trains two networks: an occupancy network that predicts the shape of a character in canonical space, and a Linear Blend Skinning network that deforms it into arbitrary poses. However, it requires a separate model for each subject. We extend this work by conditioning both networks on an identity parameter, enabling a single model to generalize across multiple identities without increasing the model's size compared to Fast-SNARF.
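To make the conditioning scheme concrete, the following is a minimal PyTorch sketch of the architecture the abstract describes: a canonical occupancy network and a Linear Blend Skinning weight network, both taking a learned per-subject identity code as an extra input. All class names, layer sizes, and the embedding-based identity code are illustrative assumptions rather than the authors' implementation, and the direct forward-skinning step here stands in for Fast-SNARF's more involved iterative correspondence search.

# Hypothetical sketch of identity-conditioned canonical occupancy + LBS networks,
# loosely following the Fast-SNARF setup described in the abstract.
# Names and dimensions are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class ConditionedMLP(nn.Module):
    """Small MLP whose input is a canonical point concatenated with an identity code."""
    def __init__(self, in_dim, id_dim, out_dim, hidden=128, layers=4):
        super().__init__()
        dims = [in_dim + id_dim] + [hidden] * (layers - 1) + [out_dim]
        blocks = []
        for i in range(len(dims) - 1):
            blocks.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                blocks.append(nn.Softplus(beta=100))
        self.net = nn.Sequential(*blocks)

    def forward(self, x, z_id):
        # x: (N, in_dim) canonical points; z_id: (N, id_dim) identity embedding.
        return self.net(torch.cat([x, z_id], dim=-1))

class IdentityConditionedAvatar(nn.Module):
    def __init__(self, n_bones=24, id_dim=16, n_ids=100):
        super().__init__()
        self.identity = nn.Embedding(n_ids, id_dim)      # learned per-subject code
        self.occupancy = ConditionedMLP(3, id_dim, 1)    # canonical-space shape
        self.lbs = ConditionedMLP(3, id_dim, n_bones)    # per-point skinning weights

    def forward(self, x_c, subject_idx, bone_transforms):
        # x_c: (N, 3) canonical points; bone_transforms: (n_bones, 4, 4) pose matrices.
        z = self.identity(subject_idx).expand(x_c.shape[0], -1)
        occ = torch.sigmoid(self.occupancy(x_c, z))       # (N, 1) occupancy
        w = torch.softmax(self.lbs(x_c, z), dim=-1)       # (N, n_bones) weights
        # Forward linear blend skinning: blend bone transforms per point,
        # then apply the blended transform to the homogeneous canonical point.
        x_h = torch.cat([x_c, torch.ones_like(x_c[:, :1])], dim=-1)   # (N, 4)
        blended = torch.einsum('nb,bij->nij', w, bone_transforms)     # (N, 4, 4)
        x_d = torch.einsum('nij,nj->ni', blended, x_h)[:, :3]         # posed points
        return occ, x_d

Because the identity code only widens the input layer of each MLP, a single model covers all subjects at essentially the same parameter count as one per-subject Fast-SNARF model, which is the size claim made in the abstract.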