Search Results
Now showing 1 - 6 of 6
Item: Neural Smoke Stylization with Color Transfer (The Eurographics Association, 2020)
Christen, Fabienne; Kim, Byungsoo; Azevedo, Vinicius C.; Solenthaler, Barbara; Wilkie, Alexander and Banterle, Francesco
Artistically controlling fluid simulations requires a large amount of manual work by an artist. The recently presented transport-based neural style transfer approach simplifies workflows, as it transfers the style of arbitrary input images onto 3D smoke simulations. However, the method only modifies the shape of the fluid and omits color information. In this work, we therefore extend the previous approach into a complete pipeline for transferring both shape and color information onto 2D and 3D smoke simulations with neural networks. Our results demonstrate that our method successfully transfers colored style features, consistently in space and time, to smoke data for different input textures.

Item: Interactive Flat Coloring of Minimalist Neat Sketches (The Eurographics Association, 2020)
Parakkat, Amal Dev; Madipally, Prudhviraj; Gowtham, Hari Hara; Cani, Marie-Paule; Wilkie, Alexander and Banterle, Francesco
We introduce a simple Delaunay-triangulation-based algorithm for the interactive coloring of neat line-art minimalist sketches, i.e., vector sketches that may include open contours. The main objective is to minimize user intervention and make interaction as natural as with the flood-fill algorithm, while extending coloring to regions with open contours. In particular, we want to spare the user from worrying about parameters such as stroke weight and size. Our solution works in two steps: 1) a segmentation step, in which the input sketch is automatically divided into regions based on the underlying Delaunay structure, and 2) the interactive grouping of neighboring regions based on user input. More precisely, a region adjacency graph is computed from the segmentation result and is interactively partitioned based on user input to generate the final colored sketch. Results show that our method is as natural as a bucket-fill tool and powerful enough to color minimalist sketches.

Item: Deep-Eyes: Fully Automatic Anime Character Colorization with Painting of Details on Empty Pupils (The Eurographics Association, 2020)
Akita, Kenta; Morimoto, Yuki; Tsuruno, Reiji; Wilkie, Alexander and Banterle, Francesco
Many studies have recently applied deep learning to the automatic colorization of line drawings. However, it is difficult to paint empty pupils with existing methods because the networks are trained on pupils that have edges, which are generated from color images using image processing. Most actual line drawings have empty pupils that artists must paint in. In this paper, we propose a novel network model that transfers the pupil details in a reference color image to input line drawings with empty pupils. We also propose a method for accurately and automatically coloring eyes: eye patches are extracted from a reference color image and automatically added to an input line drawing as color hints using our eye-position estimation network.

Item: Halftone Pattern: A New Steganographic Approach (The Eurographics Association, 2018)
Cruz, Leandro; Patrão, Bruno; Gonçalves, Nuno; Diamanti, Olga and Vaxman, Amir
In general, an image is worth a thousand words, but sometimes words are the most efficient tool for communicating information. In this work, we therefore present an approach that combines the visual appeal of images with the communication power of words. Our method is a steganographic technique for hiding textual information in an image, inspired by the use of dithering to create halftone images. Starting from a base image, it creates the coded image by associating each base-image pixel with a set of two-color pixels (a halftone) forming an appropriate pattern. The coded image is machine-readable, aesthetically pleasing, and secure, and it supports data redundancy and compression, so it can be used in a variety of applications.
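As a rough illustration of the halftone-pattern idea in the entry above: the sketch below hides two message bits per base-image pixel by picking among 2x2 dot arrangements that share the same ink coverage, so perceived brightness is preserved while the choice of arrangement carries the payload. The pattern tables, quantization, and bit layout here are our own assumptions for illustration, not the encoding from the paper.

```python
import numpy as np

# 2x2 dot patterns grouped by ink coverage (number of black dots); within a
# level, the arrangement index carries 2 message bits. Illustrative scheme,
# not the paper's actual encoding.
PATTERNS = {
    1: [np.array(p) for p in ([[1, 0], [0, 0]], [[0, 1], [0, 0]],
                              [[0, 0], [1, 0]], [[0, 0], [0, 1]])],
    2: [np.array(p) for p in ([[1, 1], [0, 0]], [[0, 0], [1, 1]],
                              [[1, 0], [1, 0]], [[0, 1], [0, 1]])],
    3: [np.array(p) for p in ([[0, 1], [1, 1]], [[1, 0], [1, 1]],
                              [[1, 1], [0, 1]], [[1, 1], [1, 0]])],
}

def encode(gray, bits):
    """Render `gray` (H x W, uint8) as a 2H x 2W halftone hiding `bits`."""
    h, w = gray.shape
    out = np.ones((2 * h, 2 * w), dtype=np.uint8)  # 1 = white, 0 = ink
    it = iter(bits)
    for y in range(h):
        for x in range(w):
            # Quantize darkness to a coverage level that has 4 arrangements.
            level = int(np.clip(round(3 * (255 - int(gray[y, x])) / 255), 1, 3))
            b0, b1 = next(it, 0), next(it, 0)      # 2 payload bits per pixel
            cell = PATTERNS[level][(b0 << 1) | b1]
            out[2 * y:2 * y + 2, 2 * x:2 * x + 2] = 1 - cell
    return out

def decode(coded, n_bits):
    """Recover the first `n_bits` hidden bits from an `encode` output."""
    bits = []
    for y in range(0, coded.shape[0], 2):
        for x in range(0, coded.shape[1], 2):
            cell = 1 - coded[y:y + 2, x:x + 2]
            idx = next(i for i, p in enumerate(PATTERNS[int(cell.sum())])
                       if np.array_equal(p, cell))
            bits += [idx >> 1, idx & 1]
            if len(bits) >= n_bits:
                return bits[:n_bits]
    return bits
```

Because arrangements with equal coverage look alike at viewing distance, the coded image stays visually close to an ordinary halftone of the base image while remaining machine-readable.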
Item: Fine-Grained Semantic Segmentation of Motion Capture Data using Dilated Temporal Fully-Convolutional Networks (The Eurographics Association, 2019)
Cheema, Noshaba; Hosseini, Somayeh; Sprenger, Janis; Herrmann, Erik; Du, Han; Fischer, Klaus; Slusallek, Philipp; Cignoni, Paolo and Miguel, Eder
Human motion capture data has been widely used in data-driven character animation. In order to generate realistic, natural-looking motions, most data-driven approaches require considerable pre-processing effort, including motion segmentation and annotation. Existing (semi-)automatic solutions either require hand-crafted features for motion segmentation or do not produce the semantic annotations required for motion synthesis and for building large-scale motion databases. In addition, human-labeled annotation data suffers from inter- and intra-labeler inconsistencies by design. We propose a semi-automatic framework for semantic segmentation of motion capture data based on supervised machine learning techniques. It first transforms a motion capture sequence into a "motion image" and applies a convolutional neural network for image segmentation; dilated temporal convolutions enable the extraction of temporal information from a large receptive field (see the sketch after the last entry). Our model outperforms two state-of-the-art models for action segmentation, as well as a popular network for sequence modeling. Above all, our method is robust to noisy and inaccurate training labels and can therefore handle human errors during the labeling process.

Item: UV Completion with Self-referenced Discrimination (The Eurographics Association, 2020)
Kang, Jiwoo; Lee, Seongmin; Lee, Sanghoon; Wilkie, Alexander and Banterle, Francesco
A facial UV map is used in many applications such as facial reconstruction, synthesis, recognition, and editing. However, it is difficult to collect the number of UVs needed for accuracy with a 3D scanning device, and a multi-view capture system would otherwise be required to construct the UV. An occluded facial UV with holes can instead be obtained by sampling an image after fitting a 3D facial model with recent alignment methods. In this paper, we introduce a facial UV completion framework that trains a deep neural network on a set of incomplete UV textures. Exploiting the fact that the texture distributions of the left and right halves of a face are almost equal, we devise an adversarial network to model the complete UV distribution of the facial texture. We also propose a self-referenced discrimination scheme that uses the facial UV completed by the generator as the reference for the real distribution during training. We demonstrate that the network can be trained to complete the facial texture from incomplete UVs comparably to training with ground-truth UVs.
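For the UV-completion entry above, the following PyTorch sketch shows one way to read the two ideas in the abstract: a left-right symmetry loss, where texels observed on one half of the face supervise the mirrored half, and self-referenced discrimination, where the discriminator's "real" sample is assembled from the observed texels and the generator's own detached completion. The networks, loss forms, and weighting are our assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def completion_step(gen, disc, uv, mask):
    """One illustrative training step for facial UV completion.

    `uv` is a B x 3 x H x W UV texture with holes; `mask` is a B x 1 x H x W
    visibility map (1 = observed texel). `gen` and `disc` are user-supplied
    generator and discriminator networks (hypothetical interfaces).
    """
    completed = gen(torch.cat([uv * mask, mask], dim=1))

    # Reconstruction on observed texels only.
    loss_rec = F.l1_loss(completed * mask, uv * mask)

    # Left-right symmetry prior: texels observed on one half of the face
    # supervise the mirrored texels on the other half.
    uv_flip = torch.flip(uv, dims=[-1])
    mask_flip = torch.flip(mask, dims=[-1])
    loss_sym = F.l1_loss(completed * mask_flip, uv_flip * mask_flip)

    # Self-referenced discrimination: lacking ground-truth complete UVs, the
    # "real" sample blends observed texels with a detached copy of the
    # generator's own completion.
    real_ref = uv * mask + completed.detach() * (1 - mask)
    logit_fake = disc(completed)
    loss_adv = F.binary_cross_entropy_with_logits(
        logit_fake, torch.ones_like(logit_fake))

    logit_real = disc(real_ref)
    logit_fake_d = disc(completed.detach())
    loss_disc = (
        F.binary_cross_entropy_with_logits(logit_real,
                                           torch.ones_like(logit_real))
        + F.binary_cross_entropy_with_logits(logit_fake_d,
                                             torch.zeros_like(logit_fake_d)))

    return loss_rec + loss_sym + 0.01 * loss_adv, loss_disc
```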
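The motion-segmentation entry above (Fine-Grained Semantic Segmentation of Motion Capture Data) treats a capture sequence as a "motion image" and labels every frame with a dilated temporal fully-convolutional network. Below is a minimal PyTorch sketch of that idea; the layer count, channel width, and dilation rates are illustrative guesses, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DilatedTemporalFCN(nn.Module):
    """Per-frame semantic segmentation of motion capture data (sketch).

    A clip of joint features over T frames is treated as a "motion image"
    (channels = joint features, width = time) and segmented fully
    convolutionally, so the output keeps one label per frame.
    """
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        layers, ch = [], n_features
        for dilation in (1, 2, 4, 8):   # growing temporal receptive field
            layers += [
                nn.Conv1d(ch, hidden, kernel_size=3,
                          padding=dilation, dilation=dilation),
                nn.ReLU(),
            ]
            ch = hidden
        self.backbone = nn.Sequential(*layers)
        self.head = nn.Conv1d(hidden, n_classes, kernel_size=1)

    def forward(self, motion):                    # motion: B x n_features x T
        return self.head(self.backbone(motion))   # logits: B x n_classes x T

# Usage: dense frame labels for a batch of motion images.
model = DilatedTemporalFCN(n_features=63, n_classes=10)  # e.g. 21 joints x 3
labels = model(torch.randn(2, 63, 240)).argmax(dim=1)    # B x T frame labels
```

Stacking dilations 1, 2, 4, and 8 gives a kernel-3 network a receptive field of about 31 frames per prediction without any pooling, which is what permits dense per-frame labels over a large temporal context.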