Search Results

  • Item
    Neural Smoke Stylization with Color Transfer
    (The Eurographics Association, 2020) Christen, Fabienne; Kim, Byungsoo; Azevedo, Vinicius C.; Solenthaler, Barbara; Wilkie, Alexander and Banterle, Francesco
    Artistically controlling fluid simulations requires a large amount of manual work by an artist. The recently presented transport-based neural style transfer approach simplifies workflows, as it transfers the style of arbitrary input images onto 3D smoke simulations. However, the method modifies only the shape of the fluid and omits color information. In this work, we therefore extend the previous approach into a complete pipeline for transferring both shape and color information onto 2D and 3D smoke simulations with neural networks. Our results demonstrate that our method successfully transfers colored style features consistently in space and time to smoke data for different input textures.
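
The full pipeline above is neural, but the basic idea of carrying color from a style image onto a smoke frame can be illustrated with a much simpler classical baseline: matching per-channel color statistics of a rendered frame to the style image. The sketch below is only that baseline under assumed inputs (RGB float images, a made-up function name); it is not the authors' method.

```python
import numpy as np

def transfer_color_statistics(content, style, eps=1e-6):
    """Match per-channel mean/std of `content` to `style`.

    Both inputs are float arrays of shape (H, W, 3) in [0, 1]. This is a
    classical color-transfer baseline, not the neural approach of the paper;
    it only illustrates carrying color information from a style image.
    """
    c_mean = content.reshape(-1, 3).mean(axis=0)
    c_std = content.reshape(-1, 3).std(axis=0) + eps
    s_mean = style.reshape(-1, 3).mean(axis=0)
    s_std = style.reshape(-1, 3).std(axis=0)
    out = (content - c_mean) / c_std * s_std + s_mean
    return np.clip(out, 0.0, 1.0)

if __name__ == "__main__":
    # Stand-in data: a grayscale "smoke" frame and a warm-toned style image.
    rng = np.random.default_rng(0)
    smoke_frame = np.repeat(rng.random((64, 64, 1)), 3, axis=2)
    style_image = rng.random((32, 32, 3)) * np.array([1.0, 0.6, 0.2])
    stylized = transfer_color_statistics(smoke_frame, style_image)
    print(stylized.shape, float(stylized.min()), float(stylized.max()))
```

Applied independently per frame, a transfer like this would flicker over time, which is exactly the spatial and temporal consistency problem the neural pipeline above addresses.
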
  • Item
    Interactive Flat Coloring of Minimalist Neat Sketches
    (The Eurographics Association, 2020) Parakkat, Amal Dev; Madipally, Prudhviraj; Gowtham, Hari Hara; Cani, Marie-Paule; Wilkie, Alexander and Banterle, Francesco
    We introduce a simple Delaunay-triangulation-based algorithm for the interactive coloring of neat line-art minimalist sketches, i.e., vector sketches that may include open contours. The main objective is to minimize user intervention and make interaction as natural as with the flood-fill algorithm, while extending coloring to regions with open contours. In particular, we want to save the user from worrying about parameters such as stroke weight and size. Our solution works in two steps: 1) a segmentation step, in which the input sketch is automatically divided into regions based on the underlying Delaunay structure, and 2) an interactive grouping of neighboring regions based on user input. More precisely, a region adjacency graph is computed from the segmentation result and interactively partitioned based on user input to generate the final colored sketch. Results show that our method is as natural as a bucket-fill tool and powerful enough to color minimalist sketches.
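
As a rough illustration of the two-step structure described above, the sketch below triangulates sampled stroke points with scipy's Delaunay, keeps triangles whose edges are short (a placeholder for the paper's segmentation criterion), links kept triangles that share an edge into a region adjacency graph, and merges regions with a union-find as a stand-in for interactive user grouping. All thresholds and names are illustrative assumptions, not the published algorithm.

```python
import numpy as np
from scipy.spatial import Delaunay

def build_region_adjacency(points, max_edge=0.2):
    """Triangulate sketch points and link triangles that share an edge.

    Triangles containing an edge longer than `max_edge` are discarded,
    a placeholder for the paper's Delaunay-based segmentation. Returns
    the triangulation, kept triangle indices, and adjacency pairs.
    """
    tri = Delaunay(points)
    kept = []
    for t, simplex in enumerate(tri.simplices):
        pts = points[simplex]
        edge_lengths = np.linalg.norm(pts - np.roll(pts, 1, axis=0), axis=1)
        if edge_lengths.max() <= max_edge:
            kept.append(t)
    kept_set = set(kept)
    adjacency = []
    for t in kept:
        for nb in tri.neighbors[t]:
            if nb in kept_set and nb > t:
                adjacency.append((t, int(nb)))
    return tri, kept, adjacency

class RegionGroups:
    """Union-find over regions; each user 'merge click' joins two groups."""
    def __init__(self, ids):
        self.parent = {i: i for i in ids}
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i
    def merge(self, a, b):
        self.parent[self.find(a)] = self.find(b)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.random((200, 2))          # stand-in for sampled stroke points
    tri, kept, adj = build_region_adjacency(pts)
    groups = RegionGroups(kept)
    for a, b in adj[:10]:               # simulate a few user merges
        groups.merge(a, b)
    print(len(kept), "regions,", len({groups.find(t) for t in kept}), "groups")
```
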
  • Item
    Deep-Eyes: Fully Automatic Anime Character Colorization with Painting of Details on Empty Pupils
    (The Eurographics Association, 2020) Akita, Kenta; Morimoto, Yuki; Tsuruno, Reiji; Wilkie, Alexander and Banterle, Francesco
    Many studies have recently applied deep learning to the automatic colorization of line drawings. However, existing methods have difficulty painting empty pupils because their networks are trained on pupils with edges, generated from color images by image processing, whereas most real line drawings have empty pupils that artists must paint in. In this paper, we propose a novel network model that transfers the pupil details in a reference color image to input line drawings with empty pupils. We also propose a method for accurately and automatically coloring eyes: eye patches are extracted from a reference color image and automatically added to an input line drawing as color hints using our eye position estimation network.
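
A heavily reduced sketch of the hint step described above: given eye positions (predicted in the real system by the eye position estimation network, here simply passed in), reference patches are composited onto the line drawing as local color hints before colorization. The function name, fixed patch size, and explicit positions are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def paste_eye_hints(line_art, reference, eye_centers, patch=24):
    """Composite reference eye patches onto a line drawing as color hints.

    `line_art` and `reference` are (H, W, 3) float arrays of the same size;
    `eye_centers` are (row, col) positions, which the paper predicts with an
    eye position estimation network but which are given explicitly here.
    """
    hinted = line_art.copy()
    h, w, _ = hinted.shape
    r = patch // 2
    for (cy, cx) in eye_centers:
        y0, y1 = max(cy - r, 0), min(cy + r, h)
        x0, x1 = max(cx - r, 0), min(cx + r, w)
        hinted[y0:y1, x0:x1] = reference[y0:y1, x0:x1]
    return hinted

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    line_art = np.ones((128, 128, 3))        # blank stand-in "line drawing"
    reference = rng.random((128, 128, 3))    # stand-in reference color image
    hinted = paste_eye_hints(line_art, reference, [(48, 44), (48, 84)])
    print(hinted.shape, float(hinted.mean()))
```
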
  • Item
    UV Completion with Self-referenced Discrimination
    (The Eurographics Association, 2020) Kang, Jiwoo; Lee, Seongmin; Lee, Sanghoon; Wilkie, Alexander and Banterle, Francesco
    A facial UV map is used in many applications such as facial reconstruction, synthesis, recognition, and editing. However, collecting the number of UVs needed for accurate results is difficult, since a 3D scanning device or a multi-view capturing system is required to construct each UV. An occluded facial UV with holes can instead be obtained by sampling an image after fitting a 3D facial model with recent alignment methods. In this paper, we introduce a facial UV completion framework that trains a deep neural network from a set of incomplete UV textures. Exploiting the fact that the facial texture distributions of the left and right half-sides are almost equal, we devise an adversarial network to model the complete UV distribution of the facial texture. We also propose a self-referenced discrimination scheme that uses the facial UV completed by the generator as the real distribution during training. We demonstrate that the network, trained only on incomplete UVs, completes the facial texture comparably to a network trained with ground-truth UVs.
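
The abstract combines two ingredients: the near left/right symmetry of facial UV textures, and a self-referenced discriminator that treats the generator's own completions as the "real" distribution, so training never requires complete ground-truth UVs. The PyTorch sketch below shows only that training-loop structure with toy networks and random tensors; the architectures, losses, masking, and the exact way the self-reference is formed are placeholder assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the paper's generator and discriminator.
gen = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
disc = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                     nn.Flatten(), nn.Linear(16 * 16 * 16, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(3):                                  # toy training loop
    uv = torch.rand(4, 3, 32, 32)                      # incomplete UV textures
    mask = (torch.rand(4, 1, 32, 32) > 0.3).float()    # 1 = observed texel
    uv = uv * mask

    completed = gen(torch.cat([uv, mask], dim=1))
    # Self-referenced "real" sample: the completed UV mirrored left/right,
    # exploiting the near-symmetry of facial textures (no ground truth used).
    self_ref = torch.flip(completed, dims=[3]).detach()

    # Discriminator update: self-referenced UVs as real, completions as fake.
    d_loss = (bce(disc(self_ref), torch.ones(4, 1)) +
              bce(disc(completed.detach()), torch.zeros(4, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: reconstruct observed texels and fool the discriminator.
    rec = ((completed - uv).abs() * mask).mean()
    g_loss = rec + 0.1 * bce(disc(completed), torch.ones(4, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    print(step, float(d_loss), float(g_loss))
```
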