Title: PERGAMO: Personalized 3D Garments from Monocular Video
Authors: Casado-Elvira, Andrés; Comino Trinidad, Marc; Casas, Dan
Editors: Dominik L. Michels; Soeren Pirk
Date issued: 2022-08-10
Year: 2022
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.14644
URI: https://diglib.eg.org:443/handle/10.1111/cgf14644
Pages: 293-304 (12 pages)

Abstract: Clothing plays a fundamental role in digital humans. Current approaches to animating 3D garments are mostly based on realistic physics simulation; however, they typically suffer from two main issues: a high computational run-time cost, which hinders their deployment, and a simulation-to-real gap, which impedes the synthesis of specific real-world cloth samples. To circumvent both issues we propose PERGAMO, a data-driven approach to learn a deformable model for 3D garments from monocular images. To this end, we first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos. We use these 3D reconstructions to train a regression model that accurately predicts how the garment deforms as a function of the underlying body pose. We show that our method is capable of producing garment animations that match real-world behavior, and that it generalizes to unseen body motions extracted from motion-capture datasets.

CCS Concepts: Computing methodologies --> Computer graphics; Neural networks
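The abstract's core technical step is a regression model that maps the underlying body pose to garment deformations, supervised by 3D reconstructions recovered from monocular video. The sketch below is a minimal, hypothetical illustration of that kind of pose-to-deformation regressor, not the paper's actual architecture: the SMPL-style pose dimensionality, the mesh vertex count, the MLP layout, and all names are assumptions made for the example.

```python
# Hypothetical sketch of a pose-to-garment-deformation regressor.
# Dimensions, architecture, and names are illustrative assumptions,
# not PERGAMO's actual model.
import torch
import torch.nn as nn

NUM_JOINTS = 24            # SMPL-style skeleton (assumption)
POSE_DIM = NUM_JOINTS * 3  # axis-angle parameters per joint
NUM_VERTS = 4424           # fixed garment-mesh topology (hypothetical size)

class GarmentRegressor(nn.Module):
    """Maps a body-pose vector to per-vertex 3D offsets of a garment mesh."""
    def __init__(self, pose_dim=POSE_DIM, num_verts=NUM_VERTS, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_verts * 3),
        )

    def forward(self, pose):
        # pose: (batch, POSE_DIM) -> offsets: (batch, NUM_VERTS, 3)
        return self.net(pose).view(-1, NUM_VERTS, 3)

# Training-step sketch: supervise predicted offsets against per-frame
# garment reconstructions obtained from monocular video.
model = GarmentRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

pose_batch = torch.randn(8, POSE_DIM)          # placeholder pose inputs
target_offsets = torch.randn(8, NUM_VERTS, 3)  # placeholder reconstructions
pred = model(pose_batch)
loss = loss_fn(pred, target_offsets)
loss.backward()
optimizer.step()
```

A real system of this kind would likely add temporal context and regularization (e.g., smoothness or collision terms) to generalize to unseen motions, but the abstract does not specify those details.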