Search Results
Now showing 1 - 5 of 5
Item: Frequency-Aware Reconstruction of Fluid Simulations with Generative Networks (The Eurographics Association, 2020)
Biland, Simon; Azevedo, Vinicius C.; Kim, Byungsoo; Solenthaler, Barbara; Wilkie, Alexander and Banterle, Francesco
Convolutional neural networks were recently employed to fully reconstruct fluid simulation data from a set of reduced parameters. However, since (de-)convolutions traditionally trained with supervised l1-loss functions do not discriminate between low and high frequencies in the data, the error is not minimized efficiently for higher bands. This directly correlates with the quality of the perceived results, since missing high-frequency details are easily noticeable. In this paper, we analyze the reconstruction quality of generative networks and present a frequency-aware loss function that is able to focus on specific bands of the dataset during training time. We show that our approach improves reconstruction quality of fluid simulation data in mid-frequency bands, yielding perceptually better results while requiring comparable training time.
(An illustrative sketch of a band-weighted loss appears after the results list.)

Item: Neural Smoke Stylization with Color Transfer (The Eurographics Association, 2020)
Christen, Fabienne; Kim, Byungsoo; Azevedo, Vinicius C.; Solenthaler, Barbara; Wilkie, Alexander and Banterle, Francesco
Artistically controlling fluid simulations requires a large amount of manual work by an artist. The recently presented transport-based neural style transfer approach simplifies workflows as it transfers the style of arbitrary input images onto 3D smoke simulations. However, the method only modifies the shape of the fluid but omits color information. In this work, we therefore extend the previous approach to obtain a complete pipeline for transferring shape and color information onto 2D and 3D smoke simulations with neural networks. Our results demonstrate that our method successfully transfers colored style features consistently in space and time to smoke data for different input textures.

Item: Adversarial Generation of Continuous Implicit Shape Representations (The Eurographics Association, 2020)
Kleineberg, Marian; Fey, Matthias; Weichert, Frank; Wilkie, Alexander and Banterle, Francesco
This work presents a generative adversarial architecture for generating three-dimensional shapes based on signed distance representations. While the deep generation of shapes has been mostly tackled by voxel and surface point cloud approaches, our generator learns to approximate the signed distance for any point in space given prior latent information. Although structurally similar to generative point cloud approaches, this formulation can be evaluated with arbitrary point density during inference, leading to fine-grained details in generated outputs. Furthermore, we study the effects of using either progressively growing voxel- or point-processing networks as discriminators, and propose a refinement scheme to strengthen the generator's capabilities in modeling the zero iso-surface decision boundary of shapes. We train our approach on the SHAPENET benchmark dataset and validate, both quantitatively and qualitatively, its performance in generating realistic 3D shapes.
(A sketch of such a point-wise implicit generator appears after the results list.)

Item: Learning Generative Models of 3D Structures (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Chaudhuri, Siddhartha; Ritchie, Daniel; Wu, Jiajun; Xu, Kai; Zhang, Hao; Mantiuk, Rafal and Sundstedt, Veronica
3D models of objects and scenes are critical to many academic disciplines and industrial applications.
Of particular interest is the emerging opportunity for 3D graphics to serve artificial intelligence: computer vision systems can benefit from synthetically generated training data rendered from virtual 3D scenes, and robots can be trained to navigate in and interact with real-world environments by first acquiring skills in simulated ones. One of the most promising ways to achieve this is by learning and applying generative models of 3D content: computer programs that can synthesize new 3D shapes and scenes. To allow users to edit and manipulate the synthesized 3D content to achieve their goals, the generative model should also be structure-aware: it should express 3D shapes and scenes using abstractions that allow manipulation of their high-level structure. This state-of-the-art report surveys historical work and recent progress on learning structure-aware generative models of 3D shapes and scenes. We present fundamental representations of 3D shape and scene geometry and structures, describe prominent methodologies including probabilistic models, deep generative models, program synthesis, and neural networks for structured data, and cover many recent methods for structure-aware synthesis of 3D shapes and indoor scenes.

Item: UV Completion with Self-referenced Discrimination (The Eurographics Association, 2020)
Kang, Jiwoo; Lee, Seongmin; Lee, Sanghoon; Wilkie, Alexander and Banterle, Francesco
A facial UV map is used in many applications such as facial reconstruction, synthesis, recognition, and editing. However, collecting the number of UVs needed for accuracy is difficult with a 3D scanning device, or requires a multi-view capturing system to construct the UV. An occluded facial UV with holes can be obtained by sampling an image after fitting a 3D facial model with recent alignment methods. In this paper, we introduce a facial UV completion framework that trains a deep neural network with a set of incomplete UV textures. Using the fact that the facial texture distributions of the left and right halves are almost equal, we devise an adversarial network to model the complete UV distribution of the facial texture. We also propose a self-referenced discrimination scheme that uses the facial UV completed by the generator to train the real distribution. It is demonstrated that the network can be trained to complete the facial texture from incomplete UVs comparably to training with ground-truth UVs.
(A rough training-step sketch for this item appears after the results list.)
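
For the first result (Frequency-Aware Reconstruction), a minimal PyTorch sketch of a band-weighted reconstruction loss is shown below. The radial mask, band edges, and weighting are assumptions for illustration only, not the paper's exact formulation.

    import torch

    def frequency_band_l1(pred, target, band=(0.2, 0.6), band_weight=2.0):
        # pred, target: (batch, channels, H, W) simulation slices.
        # band: lower/upper radius of the emphasized band, as a fraction of Nyquist.

        # Plain spatial L1 term, as in standard supervised reconstruction.
        spatial_l1 = (pred - target).abs().mean()

        # Compare spectra and build a radial band-pass mask.
        fp = torch.fft.fftshift(torch.fft.fft2(pred), dim=(-2, -1))
        ft = torch.fft.fftshift(torch.fft.fft2(target), dim=(-2, -1))
        h, w = pred.shape[-2:]
        fy = torch.linspace(-1.0, 1.0, h, device=pred.device).view(-1, 1)
        fx = torch.linspace(-1.0, 1.0, w, device=pred.device).view(1, -1)
        radius = torch.sqrt(fx ** 2 + fy ** 2)
        mask = ((radius >= band[0]) & (radius <= band[1])).float()

        # Extra penalty on spectral error inside the chosen band.
        band_l1 = ((fp - ft).abs() * mask).mean()
        return spatial_l1 + band_weight * band_l1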
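
For the third result (Adversarial Generation of Continuous Implicit Shape Representations), the core idea of a generator that predicts a signed distance for any query point given a latent code can be sketched as a small MLP. The class name, layer sizes, and activation choices are illustrative assumptions.

    import torch
    import torch.nn as nn

    class ImplicitSDFGenerator(nn.Module):
        def __init__(self, latent_dim=128, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),  # predicted signed distance
            )

        def forward(self, latent, points):
            # latent: (batch, latent_dim); points: (batch, num_points, 3)
            z = latent.unsqueeze(1).expand(-1, points.shape[1], -1)
            return self.net(torch.cat([z, points], dim=-1)).squeeze(-1)

    # Because the generator is queried per point, the sample density at
    # inference time is arbitrary, e.g. a dense grid for surface extraction:
    #   sdf = gen(z, grid_points)  # grid_points: (1, N, 3)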
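
For the last result (UV Completion with Self-referenced Discrimination), one plausible reading of a training step combines masked reconstruction, a left/right symmetry prior, and an adversarial term in which the detached generator completion stands in for a "real" sample. The sketch below follows that reading; the network signatures, loss weights, and the exact role of each discriminator branch are assumptions, not the paper's formulation.

    import torch
    import torch.nn.functional as F

    def training_step(generator, discriminator, uv_incomplete, hole_mask):
        # uv_incomplete, hole_mask: (batch, C, H, W); mask is 1 inside holes.
        completed = generator(uv_incomplete, hole_mask)

        # Reconstruction on observed texels only.
        recon = F.l1_loss(completed * (1 - hole_mask),
                          uv_incomplete * (1 - hole_mask))

        # Symmetry prior: the completed UV should roughly match its mirror image.
        sym = F.l1_loss(completed, torch.flip(completed, dims=[-1]))

        # Self-referenced adversarial term: the detached completion plays the
        # role of a "real" sample, the hole-ridden input the "fake" one.
        d_real = discriminator(completed.detach())
        d_fake = discriminator(uv_incomplete)
        d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
                  + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

        g_adv = discriminator(completed)
        g_loss = recon + 0.1 * sym + 0.01 * F.binary_cross_entropy_with_logits(
            g_adv, torch.ones_like(g_adv))
        return g_loss, d_loss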