Search Results

Now showing 1 - 10 of 13
  • Neural Smoke Stylization with Color Transfer
    (The Eurographics Association, 2020) Christen, Fabienne; Kim, Byungsoo; Azevedo, Vinicius C.; Solenthaler, Barbara; Wilkie, Alexander and Banterle, Francesco
    Artistically controlling fluid simulations requires a large amount of manual work by an artist. The recently presented transport-based neural style transfer approach simplifies workflows, as it transfers the style of arbitrary input images onto 3D smoke simulations. However, the method only modifies the shape of the fluid and omits color information. In this work, we therefore extend the previous approach into a complete pipeline for transferring shape and color information onto 2D and 3D smoke simulations with neural networks. Our results demonstrate that our method successfully transfers colored style features, consistently in space and time, to smoke data for different input textures.
  • Neural Denoising for Spectral Monte Carlo Rendering
    (The Eurographics Association, 2022) Rouphael, Robin; Noizet, Mathieu; Prévost, Stéphanie; Deleau, Hervé; Steffenel, Luiz-Angelo; Lucas, Laurent; Sauvage, Basile; Hasic-Telalovic, Jasminka
    Spectral Monte Carlo (MC) rendering has yet to be widely adopted, partly due to a specific kind of noise, called color noise, induced by wavelength-dependent phenomena. Motivated by recent advances in Monte Carlo noise reduction using deep learning, we propose to apply the same approach to color noise. Our implementation and training reconstruct a noise-free output while preserving high-frequency details, but at the cost of a loss of contrast. To address this issue, we designed a three-step pipeline that uses the contribution of a secondary denoiser to obtain high-quality results.
  • Is Drawing Order Important?
    (The Eurographics Association, 2023) Qiu, Sherry; Wang, Zeyu; McMillan, Leonard; Rushmeier, Holly; Dorsey, Julie; Babaei, Vahid; Skouras, Melina
    The drawing process is crucial to understanding the final result of a drawing. There is a long history of studying human drawing: what kinds of strokes people use and where they are placed. An area of interest in artificial intelligence is developing systems that simulate human drawing behavior. However, little work has been done to understand the order of strokes in the drawing process. Without a sufficient understanding of natural drawing order, it is difficult to build models that can generate natural drawing processes. In this paper, we present a study comparing multiple types of stroke orders. It confirms findings from previous work and demonstrates that multiple orderings of the same set of strokes can be perceived as human-drawn, and that different stroke order types achieve different perceived naturalness depending on the type of image prompt.
  • From Capture to Immersive Viewing of 3D HDR Point Clouds
    (The Eurographics Association, 2022) Loscos, Celine; Souchet, Philippe; Barrios, Théo; Valenzise, Giuseppe; Cozot, Rémi; Hahmann, Stefanie; Patow, Gustavo A.
    The collaborators of the ReVeRY project address the design of a specific camera grid, a cost-efficient system that acquires several viewpoints at once, possibly under several exposures, and the conversion of the resulting multiview, multi-exposed video stream into a high-quality 3D HDR point cloud. In the last two decades, industry and researchers have made significant advances in media content acquisition in three main directions: increased resolution and image quality with the new ultra-high-definition (UHD) standard; stereo capture for 3D content; and high-dynamic-range (HDR) imaging. Compression, representation, and interoperability of these new media are active research fields aiming to reduce data size while remaining perceptually accurate. The originality of the project is to address both HDR and depth through the entire pipeline. Creativity is enhanced by several tools that answer challenges at the different stages of the pipeline: camera setup, data processing, capture visualisation, virtual camera control, compression, and perceptually guided immersive visualisation. This tutorial presents the experience acquired by the researchers of the project.
  • Interactive Flat Coloring of Minimalist Neat Sketches
    (The Eurographics Association, 2020) Parakkat, Amal Dev; Madipally, Prudhviraj; Gowtham, Hari Hara; Cani, Marie-Paule; Wilkie, Alexander and Banterle, Francesco
    We introduce a simple Delaunay-triangulation-based algorithm for the interactive coloring of neat line-art minimalist sketches, i.e., vector sketches that may include open contours. The main objective is to minimize user intervention and make interaction as natural as with the flood-fill algorithm, while extending coloring to regions with open contours. In particular, we want to save the user from worrying about parameters such as stroke weight and size. Our solution works in two steps: 1) a segmentation step, in which the input sketch is automatically divided into regions based on the underlying Delaunay structure, and 2) an interactive grouping of neighboring regions based on user input. More precisely, a region adjacency graph is computed from the segmentation result and interactively partitioned based on user input to generate the final colored sketch. Results show that our method is as natural as a bucket-fill tool and powerful enough to color minimalist sketches.
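    The grouping step described in this abstract, merging segmented regions under user strokes via a region adjacency graph, can be sketched as a union-find merge. This is an illustrative sketch only, with hypothetical names (`color_regions`, `UnionFind`), not the authors' implementation:

```python
class UnionFind:
    """Minimal union-find over region ids 0..n-1."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)


def color_regions(adjacency, user_strokes):
    """Merge the segmented regions that each user stroke crosses.

    adjacency    : dict region id -> set of neighboring region ids
                   (the region adjacency graph from the segmentation step)
    user_strokes : list of (color, [region ids the stroke touches])
    Returns a dict region id -> color (None for untouched regions).
    """
    uf = UnionFind(1 + max(adjacency))
    # First merge all regions crossed by the same stroke...
    for _color, regions in user_strokes:
        for r in regions[1:]:
            uf.union(regions[0], r)
    # ...then assign each merged group the color of its stroke.
    root_color = {}
    for color, regions in user_strokes:
        root_color[uf.find(regions[0])] = color
    return {r: root_color.get(uf.find(r)) for r in adjacency}
```

    A single scribble across an open contour thus behaves like a bucket fill spanning several Delaunay regions.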
  • Deep-Eyes: Fully Automatic Anime Character Colorization with Painting of Details on Empty Pupils
    (The Eurographics Association, 2020) Akita, Kenta; Morimoto, Yuki; Tsuruno, Reiji; Wilkie, Alexander and Banterle, Francesco
    Many studies have recently applied deep learning to the automatic colorization of line drawings. However, it is difficult to paint empty pupils using existing methods because the networks are trained with pupils that have edges, which are generated from color images using image processing. Most actual line drawings have empty pupils that artists must paint in. In this paper, we propose a novel network model that transfers the pupil details in a reference color image to input line drawings with empty pupils. We also propose a method for accurately and automatically coloring eyes. In this method, eye patches are extracted from a reference color image and automatically added to an input line drawing as color hints using our eye position estimation network.
  • Halftone Pattern: A New Steganographic Approach
    (The Eurographics Association, 2018) Cruz, Leandro; Patrão, Bruno; Gonçalves, Nuno; Diamanti, Olga and Vaxman, Amir
    In general, an image is worth a thousand words, but sometimes words are the most efficient tool to communicate information. In this work, we therefore present an approach that combines the visual appeal of images with the communication power of words. Our method is a steganographic technique that hides textual information in an image. It is inspired by the use of dithering to create halftone images. Starting from a base image, it creates the coded image by associating each base-image pixel with a set of two-color pixels (a halftone) forming an appropriate pattern. The coded image is machine-readable, aesthetically pleasing, and secure, and it provides data redundancy and compression. Thus, it can be used in a variety of applications.
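    The core idea of hiding bits in the choice of halftone pattern can be sketched very simply: two binary 2x2 patterns with identical ink coverage look alike at a distance, yet the choice between them encodes one bit. This toy sketch (hypothetical names, and it ignores the base image's intensity modulation that the paper's method uses) illustrates the principle only:

```python
import numpy as np

# Two 2x2 binary patterns, each with 2 of 4 dots set: both approximate
# the same mid-grey, so the choice between them is visually subtle
# but encodes one hidden bit.
PATTERN = {
    0: np.array([[1, 0], [0, 1]], dtype=np.uint8),  # diagonal
    1: np.array([[0, 1], [1, 0]], dtype=np.uint8),  # anti-diagonal
}

def encode(bits, h, w):
    """Build a (2h x 2w) halftone image carrying h*w hidden bits."""
    out = np.zeros((h * 2, w * 2), dtype=np.uint8)
    for i, b in enumerate(bits):
        r, c = divmod(i, w)
        out[r*2:r*2+2, c*2:c*2+2] = PATTERN[b]
    return out

def decode(img):
    """Recover the hidden bits from the halftone image."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    bits = []
    for i in range(h * w):
        r, c = divmod(i, w)
        block = img[r*2:r*2+2, c*2:c*2+2]
        # The top-right dot differs between the two patterns.
        bits.append(int(block[0, 1]))
    return bits
```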
  • Robust Image Denoising using Kernel Predicting Networks
    (The Eurographics Association, 2021) Cai, Zhilin; Zhang, Yang; Manzi, Marco; Oztireli, Cengiz; Gross, Markus; Aydin, Tunç Ozan; Theisel, Holger and Wimmer, Michael
    We present a new method for designing high-quality denoisers that are robust to varying noise characteristics of input images. Instead of taking a conventional blind denoising approach, or relying on explicit noise parameter estimation networks and invertible camera imaging pipeline models, we propose a two-stage model that first processes an input image with a small set of specialized denoisers, and then passes the resulting intermediate denoised images to a kernel predicting network that estimates per-pixel denoising kernels. We demonstrate that our approach achieves robustness to noise parameters at a level that exceeds comparable blind denoisers, while also coming close to state-of-the-art denoising quality for camera sensor noise.
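    The final step of any kernel predicting network, applying a predicted k x k kernel at every pixel, can be sketched as follows. This is a generic illustration of the technique (hypothetical function name, naive loops rather than an optimized gather), not this paper's code:

```python
import numpy as np

def apply_per_pixel_kernels(image, kernels):
    """Apply a per-pixel predicted kernel to a grayscale image.

    image   : (H, W) array
    kernels : (H, W, k, k) array; each kernel is assumed to be
              normalized (non-negative, summing to 1), as kernel
              predicting networks typically enforce via a softmax.
    """
    H, W = image.shape
    k = kernels.shape[2]
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")  # replicate borders
    out = np.empty((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + k, x:x + k]
            out[y, x] = np.sum(patch * kernels[y, x])
    return out
```

    Because the output is a convex combination of input pixels, a constant image passes through unchanged, which is one reason kernel prediction tends to avoid color shifts.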
  • Fine-Grained Semantic Segmentation of Motion Capture Data using Dilated Temporal Fully-Convolutional Networks
    (The Eurographics Association, 2019) Cheema, Noshaba; Hosseini, Somayeh; Sprenger, Janis; Herrmann, Erik; Du, Han; Fischer, Klaus; Slusallek, Philipp; Cignoni, Paolo and Miguel, Eder
    Human motion capture data has been widely used in data-driven character animation. In order to generate realistic, natural-looking motions, most data-driven approaches require considerable pre-processing effort, including motion segmentation and annotation. Existing (semi-)automatic solutions either require hand-crafted features for motion segmentation or do not produce the semantic annotations required for motion synthesis and building large-scale motion databases. In addition, human-labeled annotation data suffers from inter- and intra-labeler inconsistencies by design. We propose a semi-automatic framework for semantic segmentation of motion capture data based on supervised machine learning techniques. It first transforms a motion capture sequence into a "motion image" and applies a convolutional neural network for image segmentation. Dilated temporal convolutions enable the extraction of temporal information from a large receptive field. Our model outperforms two state-of-the-art models for action segmentation, as well as a popular network for sequence modeling. Above all, our method is very robust under noisy and inaccurate training labels and thus can handle human errors during the labeling process.
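    The dilated temporal convolution the abstract refers to can be illustrated with a minimal 1D sketch: spacing the filter taps by a dilation factor widens the receptive field to (k - 1) * dilation + 1 without adding parameters. This is a generic illustration (hypothetical function name), not the paper's network:

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'Same'-padded dilated 1D convolution along the time axis.

    x        : (T,) input sequence (e.g. one motion feature channel)
    w        : (k,) filter weights
    dilation : gap between filter taps; stacking layers whose dilation
               doubles each time grows the receptive field exponentially.
    """
    k = len(w)
    half = (k - 1) * dilation // 2          # zero-padding for 'same' output
    xp = np.pad(x, half)
    return np.array([
        sum(w[j] * xp[t + j * dilation] for j in range(k))
        for t in range(len(x))
    ])
```

    With dilation 1 this reduces to an ordinary temporal convolution; with dilation 2, 4, 8, ... each frame's output can depend on a long window of the motion sequence, which is what makes per-frame semantic labels context-aware.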
  • Luminance-Preserving and Temporally Stable Daltonization
    (The Eurographics Association, 2023) Ebelin, Pontus; Crassin, Cyril; Denes, Gyorgy; Oskarsson, Magnus; Åström, Kalle; Akenine-Möller, Tomas; Babaei, Vahid; Skouras, Melina
    We propose a novel, real-time algorithm for recoloring images to improve the experience for a color vision deficient observer. The output is temporally stable and preserves luminance, the most important visual cue. It runs in 0.2 ms per frame on a GPU.
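    The luminance-preservation constraint this abstract names can be illustrated with a simple post-step: after any recoloring, rescale each pixel so its luminance matches the original's. This is an illustrative sketch under the assumption of linear RGB with Rec. 709 luma weights, not the authors' algorithm (`preserve_luminance` is a hypothetical name):

```python
import numpy as np

# Rec. 709 luma weights for linear RGB
LUM = np.array([0.2126, 0.7152, 0.0722])

def preserve_luminance(original, recolored, eps=1e-8):
    """Rescale each recolored pixel to match the original's luminance.

    original, recolored : (..., 3) linear-RGB arrays.
    Note: the rescaled values may exceed 1 and would need clamping or
    gamut mapping in a real pipeline.
    """
    y_orig = original @ LUM
    y_new = recolored @ LUM
    scale = y_orig / np.maximum(y_new, eps)
    return recolored * scale[..., None]
```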