Title: MultiCOIN: Multi-Modal COntrollable INbetweening
Authors: Tanveer, Maham; Zhou, Yang; Niklaus, Simon; Mahdavi Amiri, Ali; Zhang, Hao (Richard); Singh, Krishna Kumar; Zhao, Nanxuan; Masia, Belen; Thies, Justus
Date issued: 2026 (available 2026-04-17)
ISSN: 1467-8659
URI: https://diglib.eg.org/handle/10.1111/cgf70362
DOI: https://doi.org/10.1111/cgf.70362
License: CC-BY-4.0
Subject: Computer vision tasks
Pages: 11

Abstract: Video inbetweening creates smooth transitions between two frames, making it an indispensable tool for video editing and long-form video synthesis. Existing methods struggle with large or complex motion and offer limited control over intermediate frames, often misaligning with user intent. We introduce MultiCOIN, a video inbetweening framework that supports multi-modal controls, including depth transitions and layering, motion trajectories, text prompts, and target regions for movement localization, balancing flexibility, usability, and fine-grained precision. Built on a Diffusion Transformer (DiT), chosen for its proven capability to generate high-quality long videos, our model maps all motion controls into a unified sparse point-based representation compatible with the denoising process. Further, to respect the variety of controls, which operate at varying levels of granularity and influence, we separate content and motion into two branches with dedicated generators for each. A stage-wise training strategy ensures stable learning of the multi-modal controls. Extensive experiments show improved motion complexity, controllability, and narrative consistency. Project page: MultiCOIN.
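The abstract only summarizes the method, but one way to picture the "unified sparse point-based representation" is to scatter sparse control points (e.g. trajectory samples or depth anchors) into a dense, video-shaped conditioning tensor that a DiT denoiser could attend to. The following is a minimal sketch under that assumption; the function name `rasterize_control_points`, the channel layout, and the (frame, x, y, value) point format are hypothetical illustrations, not the paper's actual encoding.

```python
import numpy as np

def rasterize_control_points(points, num_frames, height, width):
    """Hypothetical sketch: scatter sparse (t, x, y, value) control
    points into a dense video-shaped tensor usable as conditioning
    for a video diffusion model. Not MultiCOIN's actual encoding.

    points: iterable of (t, x, y, value) tuples, where t is a frame
    index, (x, y) are pixel coordinates, and value encodes the control
    signal (e.g. a depth level or a trajectory identity).
    """
    # One channel of control values plus a binary validity mask, so the
    # model can distinguish "value 0" from "no control at this pixel".
    cond = np.zeros((num_frames, 2, height, width), dtype=np.float32)
    for t, x, y, value in points:
        if 0 <= t < num_frames and 0 <= y < height and 0 <= x < width:
            cond[t, 0, y, x] = value  # control-value channel
            cond[t, 1, y, x] = 1.0    # validity-mask channel
    return cond

# Usage: a single-point trajectory moving left to right over 16 frames.
traj = [(t, 8 + 4 * t, 32, 1.0) for t in range(16)]
cond = rasterize_control_points(traj, num_frames=16, height=64, width=64)
print(cond.shape)  # (16, 2, 64, 64)
```

A representation like this stays sparse in information content (a handful of points per frame) while matching the spatial layout of the video latents, which is one plausible reason point-based controls compose cleanly with a denoising process.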