Motion Vector-Based Frame Generation for Real-Time Rendering

Authors: Ha, Inwoo; Ahn, Young Chun; Yoon, Sung-eui
Editors: Christie, Marc; Han, Ping-Hsuan; Lin, Shih-Syun; Pietroni, Nico; Schneider, Teseo; Tsai, Hsin-Ruey; Wang, Yu-Shuen; Zhang, Eugene
Published: 2025 (accessioned and made available 2025-10-07)
ISBN: 978-3-03868-295-0
DOI: https://doi.org/10.2312/pg.20251297
Handle: https://diglib.eg.org/handle/10.2312/pg20251297
Pages: 11
License: Creative Commons Attribution 4.0 International License (CC BY 4.0)
CCS Concepts: Computing methodologies → Image-based rendering

Abstract:
The demand for high frame rate rendering is rapidly increasing, especially in the graphics and gaming industries. Although recent learning-based frame interpolation methods have demonstrated promising results, they have not yet achieved the quality required for real-time gaming. High-quality frame interpolation is critical for rendering fast, dynamic motion during gameplay. In graphics, motion vectors are typically favored over optical flow due to their accuracy and efficiency in game engines. However, motion vectors alone are insufficient for frame interpolation: they lack the bilateral motions needed for the target frame to be interpolated and struggle to capture non-geometric movements. To address this, we propose a novel method that leverages fast, low-cost motion vectors as guiding flows, integrating them into a task-specific intermediate flow estimation process. Our approach employs a combined motion and image context encoder-decoder to produce more accurate intermediate bilateral flows. As a result, our method significantly improves interpolation quality and achieves state-of-the-art performance on rendered content.
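To make the abstract's idea concrete, the sketch below shows one plausible shape of a motion-vector-guided intermediate flow estimator: two rendered frames plus engine motion vectors are fed to an encoder-decoder that predicts bilateral flows toward the target time, and the inputs are warped and blended. This is a minimal illustration only; the module name GuidedFlowNet, the channel counts, the two-level encoder-decoder, and the warp-and-blend synthesis step are assumptions for demonstration and are not taken from the paper.

```python
# Minimal sketch (assumed design): frames + game-engine motion vectors
# -> bilateral intermediate flows -> backward warp and blend.
import torch
import torch.nn as nn
import torch.nn.functional as F


def backward_warp(img, flow):
    """Backward-warp `img` (N,C,H,W) using a per-pixel flow (N,2,H,W) in pixels."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device, dtype=img.dtype),
        torch.arange(w, device=img.device, dtype=img.dtype),
        indexing="ij",
    )
    x = xs.unsqueeze(0) + flow[:, 0]  # displaced x coordinates
    y = ys.unsqueeze(0) + flow[:, 1]  # displaced y coordinates
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    grid = torch.stack((2 * x / (w - 1) - 1, 2 * y / (h - 1) - 1), dim=-1)
    return F.grid_sample(img, grid, align_corners=True)


class GuidedFlowNet(nn.Module):
    """Toy encoder-decoder: two frames + motion vectors -> bilateral flows + blend mask."""

    def __init__(self, ch=32):
        super().__init__()
        # Input: two RGB frames (6 ch) + two motion-vector maps (4 ch) = 10 channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(10, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Output: flow t->0 (2 ch) + flow t->1 (2 ch) + blend logit (1 ch) = 5 channels.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 5, 4, stride=2, padding=1),
        )

    def forward(self, frame0, frame1, mv01, mv10):
        x = torch.cat([frame0, frame1, mv01, mv10], dim=1)
        out = self.decoder(self.encoder(x))
        flow_t0, flow_t1 = out[:, 0:2], out[:, 2:4]
        mask = torch.sigmoid(out[:, 4:5])
        # Warp both input frames toward the target time and blend them.
        warped0 = backward_warp(frame0, flow_t0)
        warped1 = backward_warp(frame1, flow_t1)
        return mask * warped0 + (1 - mask) * warped1


if __name__ == "__main__":
    net = GuidedFlowNet()
    f0 = torch.rand(1, 3, 128, 128)   # previous rendered frame
    f1 = torch.rand(1, 3, 128, 128)   # next rendered frame
    mv = torch.zeros(1, 2, 128, 128)  # engine motion vectors (zero = static scene)
    print(net(f0, f1, mv, mv).shape)  # torch.Size([1, 3, 128, 128])
```

The key point the sketch illustrates is the one stated in the abstract: the cheap, geometry-derived motion vectors enter the network as guidance rather than being used directly as interpolation flows, so the learned decoder can recover the bilateral motion toward the target frame and account for movements the motion vectors miss.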