Title: Towards Automated 2D Character Animation
Authors: Mailee, Hamila; Anjos, Rafael Kuffner dos; Berio, Daniel; Bruckert, Alexandre
Date: 2025-05-09
ISBN: 978-3-03868-272-1
DOI: https://doi.org/10.2312/exw.20251067
Handle: https://diglib.eg.org/handle/10.2312/exw20251067
Pages: 4
License: Attribution 4.0 International License (CC BY 4.0)
CCS Concepts: Computing methodologies → Interest point and salient region detections; Object detection

Abstract: Automating facial expression changes in comics and 2D animation presents several challenges: facial structures vary widely, and audiences are sensitive to even the subtlest changes. Building on extensive research in human face image manipulation, landmark-guided image editing offers a promising solution, providing precise control and yielding satisfactory results. This study addresses the challenges hindering the advancement of landmark-based methods for cartoon characters and proposes the use of object detection models, specifically YOLOX and Faster R-CNN, to detect initial facial regions. These detections serve as a foundation for expanding landmark annotations, enabling more effective expression manipulation to animate expressive characters. The code and trained models are publicly available here.