Title: AvatarizeMe: A Fast Software Tool for Transforming Selfies into Animatable Lifelike Avatars Using Machine Learning
Authors: Manfredi, Gilda; Capece, Nicola; Erra, Ugo
Editors: Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
Date issued: 2023 (available 2023-11-12)
ISBN: 978-3-03868-235-6
ISSN: 2617-4855
DOI: https://doi.org/10.2312/stag.20231304
Handle: https://diglib.eg.org/handle/10.2312/stag20231304
Pages: 153-155 (3 pages)
License: Creative Commons Attribution 4.0 International License
CCS Concepts: Computing methodologies -> Computer vision; Machine learning; Parametric curve and surface models; Texturing

Abstract: Creating realistic avatars that faithfully replicate facial features from single input images is a challenging task in computer graphics, virtual communication, and interactive entertainment. These avatars have the potential to revolutionize virtual experiences by enhancing user engagement and personalization. However, existing methods, such as 3D facial capture systems, are costly and complex. Our approach adopts the 3D Morphable Face Model (3DMM) method to create avatars with remarkably realistic features in a matter of seconds, using only a single input image. Our method extends beyond facial shape resemblance; it also generates both facial and body textures, enhancing overall likeness. Within Unreal Engine 5, our avatars come to life with real-time body and facial animations. This is made possible through a versatile skeleton for body and head movements and a suite of 52 face blendshapes, enabling the avatar to convey emotions and expressions with fidelity. This poster presents our approach, bridging the gap between reality and virtual representation, and opening doors to immersive virtual experiences with lifelike avatars.
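
To make the 3DMM idea mentioned in the abstract concrete, the following is a minimal, self-contained Python sketch of how a morphable face model can be fit to landmarks detected in a single image: a face shape is expressed as a mean mesh plus a linear combination of shape components, and the coefficients are recovered by regularized least squares under a simple orthographic projection. All names (mean_shape, shape_basis, fit_coefficients), dimensions, and the synthetic placeholder data are illustrative assumptions, not the authors' AvatarizeMe pipeline.

# Minimal 3DMM fitting sketch (assumed setup, not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)

N_VERTS = 500          # assumed number of mesh vertices
N_COMPONENTS = 40      # assumed number of 3DMM shape components

# Stand-ins for a real morphable model learned from face scans.
mean_shape = rng.normal(size=(N_VERTS, 3))                  # mean face mesh
shape_basis = rng.normal(size=(N_COMPONENTS, N_VERTS, 3))   # shape components

def reconstruct(coeffs: np.ndarray) -> np.ndarray:
    """3DMM forward model: mean shape plus weighted shape components."""
    return mean_shape + np.tensordot(coeffs, shape_basis, axes=1)

def fit_coefficients(landmarks_2d: np.ndarray,
                     landmark_idx: np.ndarray,
                     reg: float = 1e-2) -> np.ndarray:
    """Fit shape coefficients to 2D landmarks under an orthographic
    projection (drop the z axis), using regularized linear least squares."""
    # Design matrix: effect of each coefficient on the projected landmarks.
    A = shape_basis[:, landmark_idx, :2].reshape(N_COMPONENTS, -1).T
    b = (landmarks_2d - mean_shape[landmark_idx, :2]).ravel()
    # Tikhonov regularization keeps the fitted face close to the mean face.
    lhs = A.T @ A + reg * np.eye(N_COMPONENTS)
    rhs = A.T @ b
    return np.linalg.solve(lhs, rhs)

if __name__ == "__main__":
    # Synthetic "detected landmarks": project a random ground-truth face.
    landmark_idx = rng.choice(N_VERTS, size=68, replace=False)  # 68-point layout
    true_coeffs = rng.normal(size=N_COMPONENTS)
    landmarks_2d = reconstruct(true_coeffs)[landmark_idx, :2]

    est = fit_coefficients(landmarks_2d, landmark_idx)
    print("coefficient recovery error:", np.linalg.norm(est - true_coeffs))

In practice the fitted coefficients would parameterize the facial shape of the avatar mesh, while texture generation, the body skeleton, and the 52 face blendshapes described in the abstract are handled separately inside the real-time engine.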