StyleProp: Real-time Example-based Stylization of 3D Models

Authors: Hauptfleisch, Filip; Texler, Ondrej; Texler, Aneta; Krivánek, Jaroslav; Sýkora, Daniel
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
Issue Date: 2020-10-29
Year: 2020
ISSN: 1467-8659
DOI: 10.1111/cgf.14169
URI: https://doi.org/10.1111/cgf.14169
URI: https://diglib.eg.org:443/handle/10.1111/cgf14169
Pages: 575-586
Subjects: Computing methodologies; Non-photorealistic rendering

Abstract: We present a novel approach to real-time non-photorealistic rendering of 3D models in which a single hand-drawn exemplar specifies the model's appearance. We employ guided patch-based synthesis to achieve high visual quality as well as temporal coherence. Unlike previous techniques that maintain consistency along only one dimension (the temporal domain), our approach takes multiple dimensions into account to cover all degrees of freedom offered by the available space of interactions (e.g., camera rotations). To enable an interactive experience, we precalculate a sparse latent representation of the entire interaction space, which allows a stylized image to be rendered in real time, even on a mobile device. To the best of our knowledge, the proposed system is the first to enable interactive example-based stylization of 3D models with full temporal coherence in a predefined interaction space.
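
The abstract describes a precompute-then-lookup scheme: the interaction space is sampled sparsely offline, and a stylized result is fetched for the current camera pose at runtime. The following is a minimal, hypothetical sketch of that general idea only, not the authors' StyleProp method or their latent representation; the helper stylize_view, the yaw/pitch grid, and all parameter names are illustrative assumptions.

```python
# Hypothetical sketch of a precompute-then-lookup stylization cache.
# NOT the StyleProp implementation: `stylize_view` merely stands in for
# an offline guided patch-based synthesis step, and the runtime lookup
# returns the nearest precomputed sample of the interaction space.

import numpy as np


def stylize_view(yaw: float, pitch: float, exemplar: np.ndarray) -> np.ndarray:
    """Placeholder for offline stylization of the view at (yaw, pitch)."""
    h, w, _ = exemplar.shape
    # Dummy output; a real system would synthesize a stylized frame here.
    return np.zeros((h, w, 3), dtype=np.uint8)


def precompute_interaction_space(exemplar, yaw_steps=36, pitch_steps=9):
    """Offline pass: stylize a sparse grid of camera rotations."""
    yaws = np.linspace(0.0, 360.0, yaw_steps, endpoint=False)
    pitches = np.linspace(-45.0, 45.0, pitch_steps)
    cache = {}
    for yaw in yaws:
        for pitch in pitches:
            key = (round(float(yaw), 3), round(float(pitch), 3))
            cache[key] = stylize_view(yaw, pitch, exemplar)
    return cache, yaws, pitches


def lookup(cache, yaws, pitches, yaw, pitch):
    """Runtime pass: return the precomputed frame nearest to the requested pose."""
    # Wrap-around distance for yaw, plain distance for pitch.
    nearest_yaw = yaws[np.argmin(np.abs((yaws - yaw + 180.0) % 360.0 - 180.0))]
    nearest_pitch = pitches[np.argmin(np.abs(pitches - pitch))]
    return cache[(round(float(nearest_yaw), 3), round(float(nearest_pitch), 3))]


if __name__ == "__main__":
    exemplar = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in for the hand-drawn exemplar
    cache, yaws, pitches = precompute_interaction_space(exemplar)
    frame = lookup(cache, yaws, pitches, yaw=37.0, pitch=10.0)
    print(frame.shape)
```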