Title: Learning a Style Space for Interactive Line Drawing Synthesis from Animated 3D Models
Authors: Zeyu Wang, Tuanfeng Y. Wang, Julie Dorsey, Yin Yang
Editors: Amal D. Parakkat, Bailin Deng, Seung-Tak Noh
Published: 2022-10-04
ISBN: 978-3-03868-190-8
DOI: https://doi.org/10.2312/pg.20221237
Handle: https://diglib.eg.org:443/handle/10.2312/pg20221237
Pages: 1-6 (6 pages)
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
CCS Concepts: Computing methodologies → Non-photorealistic rendering; Animation; Learning latent representations

Abstract: Most non-photorealistic rendering (NPR) methods for line drawing synthesis operate on a static shape. They are not tailored to process animated 3D models because of the extensive per-frame parameter tuning needed to achieve the intended look and natural transitions. This paper introduces a framework for interactive line drawing synthesis from animated 3D models based on a learned style space for drawing representation and interpolation. We refer to style as the relationship between stroke placement in a line drawing and its corresponding geometric properties. Starting from a given sequence of an animated 3D character, a user creates drawings for a set of keyframes. Our system embeds the raster drawings into a latent style space after disentangling them from the underlying geometry. By traversing the latent space, our system enables a smooth transition between the input keyframes. The user may also edit, add, or remove keyframes interactively, as in a typical keyframe-based workflow. We implement our system with deep neural networks trained on synthetic line drawings produced by a combination of NPR methods. Our drawing-specific supervision and optimization-based embedding mechanism allow generalization from NPR line drawings to user-created drawings at run time. Experiments show that our approach generates high-quality line drawing animations while allowing interactive control of the drawing style across frames.
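The abstract does not give implementation details, but the central mechanism it describes, embedding each keyframe drawing as a latent style code and traversing the latent space between keyframes, can be sketched. The following is a minimal illustrative sketch, not the paper's actual system: the function name interpolate_style_codes, the 64-dimensional code size, and the piecewise-linear blend are all assumptions; the paper states only that the system traverses the learned style space to produce smooth transitions.

```python
import numpy as np

def interpolate_style_codes(keyframes, num_frames):
    """Piecewise-linear interpolation of latent style codes across frames.

    keyframes:  dict mapping frame index -> style code (1-D np.ndarray),
                as would be produced by embedding each user drawing into
                the learned style space (hypothetical upstream step).
    num_frames: total number of frames in the animation.

    Returns an array of shape (num_frames, style_dim).
    """
    idxs = sorted(keyframes)
    dim = keyframes[idxs[0]].shape[0]
    codes = np.zeros((num_frames, dim))
    for t in range(num_frames):
        if t <= idxs[0]:
            # Before the first keyframe: hold its style.
            codes[t] = keyframes[idxs[0]]
        elif t >= idxs[-1]:
            # After the last keyframe: hold its style.
            codes[t] = keyframes[idxs[-1]]
        else:
            # Blend between the surrounding pair of keyframes.
            lo = max(i for i in idxs if i <= t)
            hi = min(i for i in idxs if i >= t)
            if hi == lo:
                codes[t] = keyframes[lo]
            else:
                a = (t - lo) / (hi - lo)
                codes[t] = (1 - a) * keyframes[lo] + a * keyframes[hi]
    return codes

# Example: style codes for keyframes 0, 12, and 30 of a 31-frame clip.
rng = np.random.default_rng(0)
keys = {0: rng.normal(size=64), 12: rng.normal(size=64), 30: rng.normal(size=64)}
codes = interpolate_style_codes(keys, 31)
# Each per-frame code would then be decoded together with that frame's
# geometric properties to synthesize the frame's line drawing.
```

In this reading, the keyframe-editing workflow the abstract mentions reduces to updating the keyframes dictionary and re-running the interpolation, and smoothness comes from the learned space itself, where nearby codes map to similar stroke placement.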