Trajectory-guided Anime Video Synthesis via Effective Motion Learning

dc.contributor.author: Lin, Jian (en_US)
dc.contributor.author: Li, Chengze (en_US)
dc.contributor.author: Qin, Haoyun (en_US)
dc.contributor.author: Liu, Hanyuan (en_US)
dc.contributor.author: Liu, Xueting (en_US)
dc.contributor.author: Ma, Xin (en_US)
dc.contributor.author: Chen, Cunjian (en_US)
dc.contributor.author: Wong, Tien-Tsin (en_US)
dc.contributor.editor: Christie, Marc (en_US)
dc.contributor.editor: Han, Ping-Hsuan (en_US)
dc.contributor.editor: Lin, Shih-Syun (en_US)
dc.contributor.editor: Pietroni, Nico (en_US)
dc.contributor.editor: Schneider, Teseo (en_US)
dc.contributor.editor: Tsai, Hsin-Ruey (en_US)
dc.contributor.editor: Wang, Yu-Shuen (en_US)
dc.contributor.editor: Zhang, Eugene (en_US)
dc.date.accessioned: 2025-10-07T06:02:28Z
dc.date.available: 2025-10-07T06:02:28Z
dc.date.issued: 2025
dc.description.abstract: Cartoon and anime motion production is traditionally labor-intensive, requiring detailed animatics and extensive inbetweening from keyframes. To streamline this process, we propose a novel framework that synthesizes motion directly from a single colored keyframe, guided by user-provided trajectories. Prior methods struggle with anime because they rely on optical flow estimators and models trained on natural videos; to address this limitation, we introduce an efficient motion representation specifically adapted to anime, leveraging CoTracker to effectively capture sparse frame-to-frame tracks. We design a two-stage learning mechanism: the first stage predicts sparse motion from the input frames and trajectories and generates a motion preview sequence via explicit warping; the second stage refines these previews into high-quality anime frames by fine-tuning ToonCrafter, an anime-specific video diffusion model. We train our framework on a novel animation video dataset comprising more than 500,000 clips. Experimental results demonstrate significant improvements in animating still frames, achieving better alignment with user-provided trajectories and more natural motion patterns while preserving anime stylization and visual quality. Our method also supports versatile applications, including motion manga generation and 2D vector graphic animations. The data and code will be released upon acceptance. For models, datasets, and additional visual comparisons and ablation studies, visit our project page: https://animemotiontraj.github.io/. (en_US)
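
To make the two-stage design described in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' released code. Only the CoTracker call follows the public torch.hub entry point for cotracker2; the function names, tensor shapes, and the assumption of a dense flow field densified from the predicted sparse motion are illustrative. Stage 2, refining the warped previews with a fine-tuned ToonCrafter, is omitted since the abstract specifies no interface for it.

```python
# Illustrative sketch, assuming a PyTorch setting. extract_sparse_tracks and
# warp_frame are hypothetical names; only the torch.hub CoTracker call below
# follows a documented public interface.
import torch
import torch.nn.functional as F


def extract_sparse_tracks(video: torch.Tensor, grid_size: int = 10):
    """Sparse frame-to-frame tracks (training supervision) via CoTracker.

    video: (B, T, 3, H, W) float tensor. Returns pred_tracks (B, T, N, 2)
    in pixel coordinates and pred_visibility (B, T, N).
    """
    cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker2")
    pred_tracks, pred_visibility = cotracker(video, grid_size=grid_size)
    return pred_tracks, pred_visibility


def warp_frame(keyframe: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Explicitly warp a keyframe with a dense backward flow field.

    keyframe: (1, 3, H, W); flow: (1, 2, H, W) in pixels, giving for each
    target pixel the offset back to its source location in the keyframe.
    In stage 1, such a field would be densified from the sparse motion
    predicted along the user-provided trajectories (assumption).
    """
    _, _, h, w = keyframe.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=keyframe.dtype),
        torch.arange(w, dtype=keyframe.dtype),
        indexing="ij",
    )
    src_x = xs + flow[0, 0]  # sample positions in the keyframe, x
    src_y = ys + flow[0, 1]  # sample positions in the keyframe, y
    # Normalize pixel coordinates to [-1, 1] as grid_sample expects.
    grid = torch.stack(
        (2.0 * src_x / (w - 1) - 1.0, 2.0 * src_y / (h - 1) - 1.0), dim=-1
    ).unsqueeze(0)  # (1, H, W, 2)
    return F.grid_sample(keyframe, grid, align_corners=True)


if __name__ == "__main__":
    # Smoke test: a zero flow field must reproduce the keyframe exactly,
    # the degenerate case of explicit warping.
    key = torch.rand(1, 3, 64, 64)
    zero_flow = torch.zeros(1, 2, 64, 64)
    preview = warp_frame(key, zero_flow)
    assert torch.allclose(preview, key, atol=1e-5)
```

Repeating warp_frame over the per-frame flow fields yields the motion preview sequence that stage 2 would refine into final anime frames with the fine-tuned video diffusion model.
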
dc.description.sectionheaders: Character Animation
dc.description.seriesinformation: Pacific Graphics Conference Papers, Posters, and Demos
dc.identifier.doi: 10.2312/pg.20251258
dc.identifier.isbn: 978-3-03868-295-0
dc.identifier.pages: 13 pages
dc.identifier.uri: https://doi.org/10.2312/pg.20251258
dc.identifier.uri: https://diglib.eg.org/handle/10.2312/pg20251258
dc.publisher: The Eurographics Association (en_US)
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Applied computing → Fine arts
dc.title: Trajectory-guided Anime Video Synthesis via Effective Motion Learning (en_US)
Files
Original bundle:
pg20251258.pdf (41.76 MB, Adobe Portable Document Format)