Search Results
Showing 1 - 8 of 8
1. Frequency-Aware Reconstruction of Fluid Simulations with Generative Networks
(The Eurographics Association, 2020) Biland, Simon; Azevedo, Vinicius C.; Kim, Byungsoo; Solenthaler, Barbara; Wilkie, Alexander and Banterle, Francesco
Convolutional neural networks were recently employed to fully reconstruct fluid simulation data from a set of reduced parameters. However, since (de-)convolutions traditionally trained with supervised L1 loss functions do not discriminate between low and high frequencies in the data, the error is not minimized efficiently for the higher bands. This directly affects the perceived quality of the results, since missing high-frequency details are easily noticeable. In this paper, we analyze the reconstruction quality of generative networks and present a frequency-aware loss function that can focus on specific frequency bands of the dataset during training. We show that our approach improves the reconstruction quality of fluid simulation data in mid-frequency bands, yielding perceptually better results while requiring comparable training time.

2. Neural Smoke Stylization with Color Transfer
(The Eurographics Association, 2020) Christen, Fabienne; Kim, Byungsoo; Azevedo, Vinicius C.; Solenthaler, Barbara; Wilkie, Alexander and Banterle, Francesco
Artistically controlling fluid simulations requires a large amount of manual work by an artist. The recently presented transport-based neural style transfer approach simplifies workflows, as it transfers the style of arbitrary input images onto 3D smoke simulations. However, the method only modifies the shape of the fluid and omits color information. In this work, we therefore extend the previous approach into a complete pipeline for transferring both shape and color information onto 2D and 3D smoke simulations with neural networks.
Our results demonstrate that our method successfully transfers colored style features, consistently in space and time, to smoke data for different input textures.

3. Adversarial Generation of Continuous Implicit Shape Representations
(The Eurographics Association, 2020) Kleineberg, Marian; Fey, Matthias; Weichert, Frank; Wilkie, Alexander and Banterle, Francesco
This work presents a generative adversarial architecture for generating three-dimensional shapes based on signed distance representations. While deep shape generation has mostly been tackled by voxel and surface point-cloud approaches, our generator learns to approximate the signed distance of any point in space given prior latent information. Although structurally similar to generative point-cloud approaches, this formulation can be evaluated at arbitrary point density during inference, leading to fine-grained detail in the generated outputs. Furthermore, we study the effects of using either progressively growing voxel- or point-processing networks as discriminators, and propose a refinement scheme to strengthen the generator's capability to model the zero iso-surface decision boundary of shapes. We train our approach on the ShapeNet benchmark dataset and validate, both quantitatively and qualitatively, its performance in generating realistic 3D shapes.

4. Deep Learning for Graphics
(The Eurographics Association, 2018) Mitra, Niloy J.; Ritschel, Tobias; Kokkinos, Iasonas; Guerrero, Paul; Kim, Vladimir; Rematas, Konstantinos; Yumer, Ersin; Ritschel, Tobias and Telea, Alexandru
In computer graphics, many traditional problems are now better handled by deep-learning-based data-driven methods. In applications that operate on regular 2D domains, like image processing and computational photography, deep networks are state of the art, beating dedicated hand-crafted methods by significant margins.
More recently, other domains such as geometry processing, animation, video processing, and physical simulation have benefited from deep learning methods as well. The massive volume of research that has emerged in just a few years is often difficult to grasp for researchers new to the area. This tutorial gives an organized overview of the core theory, practice, and graphics-related applications of deep learning.

5. Learning Generative Models of 3D Structures
(The Eurographics Association and John Wiley & Sons Ltd., 2020) Chaudhuri, Siddhartha; Ritchie, Daniel; Wu, Jiajun; Xu, Kai; Zhang, Hao; Mantiuk, Rafal and Sundstedt, Veronica
3D models of objects and scenes are critical to many academic disciplines and industrial applications. Of particular interest is the emerging opportunity for 3D graphics to serve artificial intelligence: computer vision systems can benefit from synthetically generated training data rendered from virtual 3D scenes, and robots can be trained to navigate in and interact with real-world environments by first acquiring skills in simulated ones. One of the most promising ways to achieve this is by learning and applying generative models of 3D content: computer programs that can synthesize new 3D shapes and scenes. To allow users to edit and manipulate the synthesized 3D content to achieve their goals, the generative model should also be structure-aware: it should express 3D shapes and scenes using abstractions that allow manipulation of their high-level structure. This state-of-the-art report surveys historical work and recent progress on learning structure-aware generative models of 3D shapes and scenes.
We present fundamental representations of 3D shape and scene geometry and structure; describe prominent methodologies, including probabilistic models, deep generative models, program synthesis, and neural networks for structured data; and cover many recent methods for structure-aware synthesis of 3D shapes and indoor scenes.

6. Learning Generative Models of 3D Structures
(The Eurographics Association, 2019) Chaudhuri, Siddhartha; Ritchie, Daniel; Xu, Kai; Zhang, Hao (Richard); Jakob, Wenzel and Puppo, Enrico
Many important applications demand 3D content, yet 3D modeling is a notoriously difficult and inaccessible activity. This tutorial provides a crash course in one of the most promising approaches for democratizing 3D modeling: learning generative models of 3D structures. Such generative models typically describe a statistical distribution over a space of possible 3D shapes or 3D scenes, as well as a procedure for sampling new shapes or scenes from that distribution. To be useful to non-experts for design purposes, a generative model must represent 3D content at a high level of abstraction in which the user can express their goals; that is, it must be structure-aware. In this tutorial, we take a deep dive into the most exciting methods for building generative models of both individual shapes and composite scenes, highlighting how standard data-driven methods need to be adapted, or new methods developed, to create models that are both generative and structure-aware. The tutorial assumes knowledge of the fundamentals of computer graphics, linear algebra, and probability, though a quick refresher of important algorithmic ideas from geometric analysis and machine learning is included.
Attendees should come away from this tutorial with a broad understanding of historical and current work in generative 3D modeling, as well as familiarity with the mathematical tools needed to start their own research or product development in this area.

7. A Smart Palette for Helping Novice Painters to Mix Physical Watercolor Pigments
(The Eurographics Association, 2018) Chen, Mei-Yun; Yang, Ci-Syuan; Ouhyoung, Ming; Jain, Eakta and Kosinka, Jiří
For novice painters, color mixing is a necessary skill that takes many years to learn. To make this skill easier to acquire, we design a system, a smart palette, to help them learn quickly. Our system is based on physical watercolor pigments: we use a spectrometer to measure the transmittance and reflectance of watercolor pigments and collect a color-mixing dataset. We then train a color-mixing model with a deep neural network (DNN) and use the model to predict a large amount of color-mixing data, creating a lookup table for color matching. With the smart palette, users can select a target color from an input image; the palette then finds the nearest matching color and shows a recipe: two pigments and their respective quantities that can be mixed to obtain that color.

8. UV Completion with Self-referenced Discrimination
(The Eurographics Association, 2020) Kang, Jiwoo; Lee, Seongmin; Lee, Sanghoon; Wilkie, Alexander and Banterle, Francesco
A facial UV map is used in many applications such as facial reconstruction, synthesis, recognition, and editing. However, it is difficult to collect the number of UVs needed for accuracy, since a 3D scanning device or a multi-view capture system is required to construct them. An occluded facial UV with holes can instead be obtained by sampling an image after fitting a 3D facial model with recent alignment methods. In this paper, we introduce a facial UV completion framework that trains a deep neural network with a set of incomplete UV textures.
Exploiting the fact that the facial texture distributions of the left and right half-sides are almost equal, we devise an adversarial network to model the complete UV distribution of the facial texture. We also propose a self-referenced discrimination scheme that uses facial UVs completed by the generator as references for the real distribution during training. We demonstrate that the network can be trained to complete facial textures from incomplete UVs with quality comparable to training on ground-truth UVs.
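The left/right symmetry prior underlying the last result can be illustrated with a minimal sketch. The `mirror_fill` helper below is hypothetical and much cruder than the paper's adversarial completion: it simply copies texels from the horizontally mirrored half wherever a hole's mirror counterpart is observed.

```python
import numpy as np

def mirror_fill(uv, mask):
    """Fill holes in a (H, W) UV texture using the horizontally
    mirrored texel when that texel is observed. `mask` is True
    where a texel is valid. A crude symmetry prior, not the
    paper's learned completion."""
    mirrored = uv[:, ::-1]
    mirrored_mask = mask[:, ::-1]
    fill = (~mask) & mirrored_mask      # holes whose mirror is known
    out = uv.copy()
    out[fill] = mirrored[fill]
    return out, mask | fill
```

Holes whose mirror texel is also missing remain unfilled, which is exactly the gap a learned generative model is needed to close.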
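The lookup-table color matching described in the smart-palette result (item 7) amounts to a nearest-neighbor search over precomputed mixture colors. A minimal sketch, with an illustrative Euclidean-RGB metric and a toy table standing in for the paper's DNN-predicted mixing data:

```python
import numpy as np

def nearest_mix(target_rgb, table_rgb, table_recipes):
    """Return the mixing recipe whose predicted color is closest to
    the target. The Euclidean-RGB metric and the names here are
    illustrative assumptions, not the paper's exact formulation."""
    d = np.linalg.norm(table_rgb - np.asarray(target_rgb, dtype=float), axis=1)
    i = int(np.argmin(d))
    return table_recipes[i], table_rgb[i]

# Toy lookup table: predicted mixture colors and their recipes.
table = np.array([[255.0, 0.0, 0.0],
                  [0.0, 0.0, 255.0],
                  [128.0, 0.0, 128.0]])
recipes = ["red only", "blue only", "1 part red + 1 part blue"]
```

For a purple target such as `(120, 10, 130)`, the lookup returns the red-plus-blue recipe; a perceptual color distance (e.g. in CIELAB) would be a natural refinement.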
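The frequency-aware loss in the first result weights reconstruction error by frequency band. One way to sketch such a band-weighted L1 loss is shown below; this radial-FFT formulation and its band partition are assumptions for illustration, not the paper's exact loss.

```python
import numpy as np

def frequency_band_loss(pred, target, band_weights):
    """L1-style loss on the 2D spectrum of the residual, accumulated
    per radial frequency band with user-chosen weights. Larger
    weights on outer bands emphasize high-frequency detail."""
    diff = np.fft.fftshift(np.fft.fft2(pred - target))
    h, w = diff.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    n_bands = len(band_weights)
    edges = np.linspace(0.0, radius.max() + 1e-9, n_bands + 1)
    loss = 0.0
    for k, wk in enumerate(band_weights):
        band = (radius >= edges[k]) & (radius < edges[k + 1])
        if band.any():
            loss += wk * np.abs(diff[band]).mean()
    return loss
```

Identical inputs give zero loss, and raising the weight of the outermost band penalizes exactly the high-frequency residuals that the abstract notes are perceptually most noticeable.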