Search Results
Showing 1-10 of 11 results
Item: Aesthetically-Oriented Atmospheric Scattering (The Eurographics Association, 2019)
Authors: Shen, Yang; Mallett, Ian; Shkurko, Konstantin
Editors: Kaplan, Craig S.; Forbes, Angus; DiVerdi, Stephen
We present Aesthetically-Oriented Atmospheric Scattering (AOAS): an experiment into the feasibility of using real-time rendering as a tool to explore sky styles. AOAS provides an interactive design environment which enables rapid iteration cycles from concept to implementation to preview. Existing real-time rendering techniques for atmospheric scattering struggle to produce non-photorealistic sky styles within any 3D scene. To solve this problem, we first simplify the geometric representation of atmospheric scattering to a single skydome, leveraging the flexibility and simplicity of skydomes in compositing with 3D scenes. Second, we classify the essential and non-essential visual characteristics of the sky and allow AOAS to vary the latter, thus producing meaningful, non-photorealistic sky styles with real-time atmospheric scattering that are still recognizable as skies but contain artistic stylization. We use AOAS to generate a wide variety of sky examples ranging from physical to highly stylized in appearance. The algorithm can be easily implemented on the GPU and performs at interactive frame rates with low memory consumption and CPU usage.

Item: Enhancing Neural Style Transfer using Patch-Based Synthesis (The Eurographics Association, 2019)
Authors: Texler, Ondřej; Fišer, Jakub; Lukáč, Mike; Lu, Jingwan; Shechtman, Eli; Sýkora, Daniel
Editors: Kaplan, Craig S.; Forbes, Angus; DiVerdi, Stephen
We present a new approach to example-based style transfer which combines neural methods with patch-based synthesis to achieve compelling stylization quality even for high-resolution imagery. We take advantage of neural techniques to provide adequate stylization at the global level and use their output as a prior for subsequent patch-based synthesis at the detail level. Thanks to this combination, our method better preserves the high frequencies of the original artistic media, thereby dramatically increasing the fidelity of the resulting stylized imagery. We also show how to stylize extremely large images (e.g., 340 Mpix) without the need to run the synthesis at the pixel level, while still retaining the original high-frequency details.

Item: Non-Photorealistic Animation for Immersive Storytelling (The Eurographics Association, 2019)
Authors: Curtis, Cassidy J.; Dart, Kevin; Latzko, Theresa; Kahrs, John
Editors: Kaplan, Craig S.; Forbes, Angus; DiVerdi, Stephen
Immersive media such as virtual and augmented reality pose some interesting new challenges for non-photorealistic animation: we must not only balance the screen-space rules of a 2D visual style against 3D motion coherence, but also account for stereo spatialization and interactive camera movement, at a rate of 90 frames per second. We introduce two new real-time rendering techniques: MetaTexture, an example-based multiresolution texturing method that adheres to the movement of 3D geometry while maintaining a consistent level of screen-space detail, and Edge Breakup, a method for roughening edges by warping with structured noise. We show how we have used these techniques, along with art-directable coloring, shadow filtering, and shader-based texture indication, to achieve the "moving illustration" style of the immersive short film "Age of Sail".
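The Edge Breakup idea above can be illustrated with a minimal sketch: offset each pixel's lookup position in an edge mask with a smooth displacement field so that silhouettes wobble. This is not the paper's 90 fps shader implementation; `value_noise` below is a cheap stand-in for the structured noise the authors use, and all function names here are hypothetical.

```python
import numpy as np

def value_noise(shape, freq, seed=0):
    # Cheap smooth noise: a small random grid, bilinearly upsampled.
    # A stand-in for the structured noise used in the paper.
    rng = np.random.default_rng(seed)
    h, w = shape
    grid = rng.uniform(-1.0, 1.0, (freq + 1, freq + 1))
    ys = np.linspace(0.0, freq, h)
    xs = np.linspace(0.0, freq, w)
    y0 = np.minimum(np.floor(ys).astype(int), freq - 1)
    x0 = np.minimum(np.floor(xs).astype(int), freq - 1)
    fy = (ys - y0)[:, None]
    fx = (xs - x0)[None, :]
    return ((1 - fy) * (1 - fx) * grid[y0][:, x0]
            + (1 - fy) * fx * grid[y0][:, x0 + 1]
            + fy * (1 - fx) * grid[y0 + 1][:, x0]
            + fy * fx * grid[y0 + 1][:, x0 + 1])

def edge_breakup(edge_mask, amplitude=3.0, freq=8):
    # Roughen edges by warping each pixel's lookup coordinates
    # with a smooth per-axis displacement field.
    h, w = edge_mask.shape
    dy = amplitude * value_noise((h, w), freq, seed=1)
    dx = amplitude * value_noise((h, w), freq, seed=2)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ys = np.clip(np.rint(yy + dy).astype(int), 0, h - 1)
    xs = np.clip(np.rint(xx + dx).astype(int), 0, w - 1)
    return edge_mask[ys, xs]

# Example: roughen the silhouette of a disc.
yy, xx = np.mgrid[:128, :128]
disc = ((yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2).astype(float)
rough = edge_breakup(disc)
```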
Item: Video Motion Stylization by 2D Rigidification (The Eurographics Association, 2019)
Authors: Delanoy, Johanna; Bousseau, Adrien; Hertzmann, Aaron
Editors: Kaplan, Craig S.; Forbes, Angus; DiVerdi, Stephen
This paper introduces a video stylization method that increases the apparent rigidity of motion. Existing stylization methods often retain the 3D motion of the original video, making the result look like a 3D scene covered in paint rather than a 2D painting of a scene. In contrast, traditional hand-drawn animations often exhibit simplified in-plane motion, as in cut-out animations where the animator moves pieces of paper from frame to frame. Inspired by this technique, we propose to modify a video such that its content undergoes 2D rigid transforms. To achieve this goal, our approach applies motion segmentation and optimization to best approximate the input optical flow with piecewise-rigid transforms, and re-renders the video such that its content follows the simplified motion. The output of our method is a new video and its optical flow, which can be fed to any existing video stylization algorithm.
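The core least-squares step of the rigidification described above, fitting a 2D rotation and translation to the optical-flow correspondences of one motion segment, is the classical Procrustes/Kabsch problem. The sketch below covers only that step under that reading, omitting the paper's motion segmentation and re-rendering stages; the function names are hypothetical.

```python
import numpy as np

def fit_rigid_2d(src, dst):
    # Least-squares rigid transform (rotation R, translation t)
    # mapping src (N,2) onto dst (N,2): the Kabsch/Procrustes solution.
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def rigidify_flow(flow, labels):
    # Replace the flow (H,W,2) inside each motion segment with the
    # flow induced by that segment's best-fitting rigid transform.
    h, w, _ = flow.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pts = np.stack([xx.ravel(), yy.ravel()], axis=1).astype(float)
    flat = flow.reshape(-1, 2)
    out = np.zeros_like(flat)
    for lbl in np.unique(labels):
        m = labels.ravel() == lbl
        src, dst = pts[m], pts[m] + flat[m]
        R, t = fit_rigid_2d(src, dst)
        out[m] = src @ R.T + t - src
    return out.reshape(h, w, 2)
```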
Item: Learning from Multi-domain Artistic Images for Arbitrary Style Transfer (The Eurographics Association, 2019)
Authors: Xu, Zheng; Wilber, Michael; Fang, Chen; Hertzmann, Aaron; Jin, Hailin
Editors: Kaplan, Craig S.; Forbes, Angus; DiVerdi, Stephen
We propose a fast feed-forward network for arbitrary style transfer, which can generate stylized images for previously unseen content and style image pairs. Besides the traditional content and style representations based on deep features and texture statistics, we use adversarial networks to regularize the generation of stylized images. Our adversarial network learns the intrinsic properties of image styles from large-scale multi-domain artistic images. The adversarial training is challenging because both the input and output of our generator are diverse multi-domain images. We use a conditional generator that stylizes content by shifting the statistics of deep features, and a conditional discriminator based on the coarse category of styles. Moreover, we propose a mask module to spatially decide the stylization level and stabilize adversarial training by avoiding mode collapse. As a side effect, our trained discriminator can be applied to rank and select representative stylized images. We qualitatively and quantitatively evaluate the proposed method and compare it with recent style transfer methods. We release our code and model at https://github.com/nightldj/behance_release.
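The "shifting the statistics of deep features" mentioned above is, in its standard form, adaptive instance normalization (AdaIN): renormalize each content feature channel to the style's per-channel mean and standard deviation. The following is a minimal numpy sketch of that standard operation, not necessarily the paper's exact conditioning module.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    # Adaptive instance normalization on (C, H, W) feature maps:
    # whiten each content channel, then rescale and shift it to
    # the style's per-channel statistics.
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_sd = content.std(axis=(1, 2), keepdims=True)
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_sd = style.std(axis=(1, 2), keepdims=True)
    return s_sd * (content - c_mu) / (c_sd + eps) + s_mu

# Example with random stand-in feature maps.
rng = np.random.default_rng(0)
stylized = adain(rng.normal(size=(64, 32, 32)),
                 rng.normal(size=(64, 32, 32)))
```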
Item: Defining Hatching in Art (The Eurographics Association, 2019)
Authors: Philbrick, Greg; Kaplan, Craig S.
Editors: Kaplan, Craig S.; Forbes, Angus; DiVerdi, Stephen
We define hatching, a drawing technique, as rigorously as possible. A pure mathematical formulation or even a binary this-or-that definition is unreachable, but useful insights come from getting as close as we can. First we explain hatching's purposes. Then we define hatching as the use of patches: groups of roughly parallel curves that form flexible, simple patterns. After elaborating on the parts of this definition, we briefly treat considerations for research in expressive rendering.

Item: Irregular Pebble Mosaics with Sub-Pebble Detail (The Eurographics Association, 2019)
Authors: Javid, Ali Sattari; Doyle, Lars; Mould, David
Editors: Kaplan, Craig S.; Forbes, Angus; DiVerdi, Stephen
Pebble mosaics convey images through an irregular tiling of rounded pebbles. Past work used relatively uniform tile sizes. We show how to create detailed representations of input photographs in a pebble mosaic style: we first create pebble shapes through a variant of k-means, then compute sub-pebble detail with textured, two-tone pebbles. We use a custom distance function to ensure that pebble sizes adapt to local detail and orient to local feature directions, for an overall effect of high fidelity to the input photograph despite the constraints of the pebble style.

Item: Stipple Removal in Extreme-tone Regions (The Eurographics Association, 2019)
Authors: Azami, Rosa; Doyle, Lars; Mould, David
Editors: Kaplan, Craig S.; Forbes, Angus; DiVerdi, Stephen
Conventional tone-preserving stippling struggles with extreme-tone regions. Dark regions require immense quantities of stipples, while light regions become littered with stipples that are distracting and, because of their low density, cannot communicate any image features that may be present. We propose a method to address these problems, augmenting existing stippling methods. We cover dark regions with solid polygons rather than stipples; in light areas, we both preprocess the image to prevent stipple placement in the very lightest regions and postprocess the stipple distribution to remove stipples that contribute little to the image structure. Our modified stipple images have better visual quality than the originals despite using fewer stipples.

Item: Real-Time Patch-Based Stylization of Portraits Using Generative Adversarial Network (The Eurographics Association, 2019)
Authors: Futschik, David; Chai, Menglei; Cao, Chen; Ma, Chongyang; Stoliar, Aleksei; Korolev, Sergey; Tulyakov, Sergey; Kučera, Michal; Sýkora, Daniel
Editors: Kaplan, Craig S.; Forbes, Angus; DiVerdi, Stephen
We present a learning-based style transfer algorithm for human portraits which significantly outperforms the current state of the art in computational overhead while maintaining comparable visual quality. We show how to design a conditional generative adversarial network capable of reproducing the output of the patch-based method of Fišer et al. [FJS*17], which is slow to compute but delivers state-of-the-art visual quality. Since the resulting end-to-end network can be evaluated quickly on current consumer GPUs, our solution enables the first real-time high-quality style transfer to facial videos that runs at interactive frame rates. Moreover, in cases where the original algorithmic approach of Fišer et al. fails, our network can provide a more visually pleasing result thanks to its ability to generalize. We demonstrate the practical utility of our approach on a variety of styles and target subjects.

Item: Robotic Painting using Semantic Image Abstraction (The Eurographics Association, 2025)
Authors: Stroh, Michael; Paetzold, Patrick; Berio, Daniel; Leymarie, Frederic Fol; Kehlbeck, Rebecca; Deussen, Oliver
Editors: Berio, Daniel; Bruckert, Alexandre
We present a novel image segmentation and abstraction pipeline tailored to robot painting applications, addressing the unique challenges of realizing digital abstractions as physical artistic renderings. Our approach generates adaptive, semantics-based abstractions that balance aesthetic appeal, structural coherence, and the practical constraints inherent to robotic systems. By integrating panoptic segmentation with color-based over-segmentation, we partition images into meaningful regions corresponding to semantic objects while providing customizable abstraction levels optimized for robotic realization. We employ saliency maps and color-difference metrics to support automatic parameter selection, guiding a merging process that detects and preserves critical object boundaries while simplifying less salient areas. Graph-based community detection further refines the abstraction by grouping regions based on local connectivity and semantic coherence. These abstractions enable robotic systems to create paintings on real canvases with a controlled level of detail and abstraction.
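To illustrate the structure of the merging stage above: build a region adjacency graph from the over-segmentation and group similar regions. The paper uses graph-based community detection informed by saliency and color-difference terms; the sketch below substitutes a much simpler greedy agglomerative merge driven by mean-color distance, and its names are hypothetical.

```python
import numpy as np

def merge_regions(labels, image, n_target):
    # Greedy agglomerative merging over a region adjacency graph:
    # repeatedly fuse the pair of 4-connected neighboring regions
    # whose mean colors are closest, until n_target regions remain.
    # (A simple stand-in for the paper's community detection.)
    labels = labels.copy()

    def stats():
        means = {l: image[labels == l].mean(axis=0)
                 for l in np.unique(labels)}
        horiz = np.stack([labels[:, :-1], labels[:, 1:]], axis=-1)
        vert = np.stack([labels[:-1, :], labels[1:, :]], axis=-1)
        pairs = np.vstack([horiz[horiz[..., 0] != horiz[..., 1]],
                           vert[vert[..., 0] != vert[..., 1]]])
        adj = {tuple(sorted(p)) for p in pairs}
        return means, adj

    while len(np.unique(labels)) > n_target:
        means, adj = stats()
        a, b = min(adj,
                   key=lambda e: np.linalg.norm(means[e[0]] - means[e[1]]))
        labels[labels == b] = a   # fuse the most similar pair
    return labels
```

In practice this would run on the label map produced by the color-based over-segmentation, with the distance term extended by the saliency and semantic-coherence cues the abstract describes.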