Search Results

Now showing 1 - 10 of 39
  • Item
    HairControl: A Tracking Solution for Directable Hair Simulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Milliez, Antoine; Sumner, Robert W.; Gross, Markus; Thomaszewski, Bernhard; Thuerey, Nils and Beeler, Thabo
    We present a method for adding artistic control to physics-based hair simulation. Taking as input an animation of a coarse set of guide hairs, we constrain a subsequent higher-resolution simulation of detail hairs to follow the input motion in a spatially-averaged sense. The resulting high-resolution motion adheres to the artistic intent, but is enhanced with detailed deformations and dynamics generated by physics-based simulation. The technical core of our approach is formed by a set of tracking constraints, requiring the center of mass of a given subset of detail hair to maintain its position relative to a reference point on the corresponding guide hair. As a crucial element of our formulation, we introduce the concept of dynamically changing constraint targets that allow reference points to slide along the guide hairs to provide sufficient flexibility for natural deformations. We furthermore propose to regularize the null space of the tracking constraints based on variance minimization, effectively controlling the amount of spread in the hair. We demonstrate the ability of our tracking solver to generate directable yet natural hair motion on a set of targeted experiments and show its application to production-level animations.
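    A minimal numpy sketch of the two ingredients named in this abstract, the center-of-mass tracking step and a variance-style spread control; the position-based update and all names are illustrative assumptions, not the published solver:

        import numpy as np

        def track_center_of_mass(x, m, target, stiffness=1.0):
            # x: (n, 3) detail-hair particle positions, m: (n,) masses,
            # target: (3,) reference point on the guide hair.
            # A uniform translation moves the center of mass exactly onto
            # the target at stiffness = 1 while preserving relative shape.
            com = (m[:, None] * x).sum(axis=0) / m.sum()
            return x + stiffness * (target - com)

        def regularize_spread(x, m, amount):
            # Null-space sketch: scaling particle offsets about the center
            # of mass changes the hair's spread without moving the COM.
            com = (m[:, None] * x).sum(axis=0) / m.sum()
            return com + (1.0 - amount) * (x - com)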
  • Item
    Deep Compositional Denoising for High-quality Monte Carlo Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Zhang, Xianyao; Manzi, Marco; Vogels, Thijs; Dahlberg, Henrik; Gross, Markus; Papas, Marios; Bousseau, Adrien and McGuire, Morgan
    We propose a deep-learning method for automatically decomposing noisy Monte Carlo renderings into components that kernel-predicting denoisers can denoise more effectively. In our model, a neural decomposition module learns to predict noisy components and corresponding feature maps, which are consecutively reconstructed by a denoising module. The components are predicted based on statistics aggregated at the pixel level by the renderer. Denoising these components individually allows the use of per-component kernels that adapt to each component's noisy signal characteristics. Experimentally, we show that the proposed decomposition module consistently improves the denoising quality of current state-of-the-art kernel-predicting denoisers on large-scale academic and production datasets.
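    As a rough illustration of the recombination step: a kernel-predicting reconstruction applied per component, then summed. A numpy sketch under assumed array shapes, not the paper's network code:

        import numpy as np

        def apply_per_pixel_kernels(comp, kernels):
            # comp: (H, W, 3) one noisy component; kernels: (H, W, k, k)
            # per-pixel filter weights, assumed normalized to sum to 1.
            H, W, _ = comp.shape
            k = kernels.shape[-1]
            r = k // 2
            padded = np.pad(comp, ((r, r), (r, r), (0, 0)), mode="edge")
            out = np.zeros_like(comp)
            for dy in range(k):
                for dx in range(k):
                    w = kernels[:, :, dy, dx, None]
                    out += w * padded[dy:dy + H, dx:dx + W]
            return out

        def denoise_decomposed(components, kernel_sets):
            # Denoise each predicted component with its own kernels,
            # then recombine by summation into the final image.
            return sum(apply_per_pixel_kernels(c, k)
                       for c, k in zip(components, kernel_sets))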
  • Item
    Glyph-Based Visualization of Affective States
    (The Eurographics Association, 2020) Kovacevic, Nikola; Wampfler, Rafael; Solenthaler, Barbara; Gross, Markus; Günther, Tobias; Kerren, Andreas and Garth, Christoph and Marai, G. Elisabeta
    Decades of research in psychology on the formal measurement of emotions led to the concept of affective states. Visualizing the measured affective state can be useful in education, as it allows teachers to adapt lessons based on the affective state of students. In the entertainment industry, game mechanics can be adapted based on the boredom and frustration levels of a player. Visualizing the affective state can also increase emotional self-awareness of the user whose state is being measured, which can have an impact on well-being. However, graphical user interfaces seldom visualize the user's affective state, but rather focus on the purely objective interaction between the system and the user. This paper proposes two graphical user interface widgets that visualize the user's affective state, ensuring a compact and unobtrusive visualization. In a user study with 644 participants, the widgets were evaluated in relation to a baseline widget and were tested on intuitiveness and understandability. Particularly in terms of understandability, the baseline was outperformed by our two widgets.
  • Item
    Deep Fluids: A Generative Network for Parameterized Fluid Simulations
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Kim, Byungsoo; Azevedo, Vinicius C.; Thuerey, Nils; Kim, Theodore; Gross, Markus; Solenthaler, Barbara; Alliez, Pierre and Pellacini, Fabio
    This paper presents a novel generative model to synthesize fluid simulations from a set of reduced parameters. A convolutional neural network is trained on a collection of discrete, parameterizable fluid simulation velocity fields. Due to the capability of deep learning architectures to learn representative features of the data, our generative model is able to accurately approximate the training data set, while providing plausible interpolated in-betweens. The proposed generative model is optimized for fluids by a novel loss function that guarantees divergence-free velocity fields at all times. In addition, we demonstrate that we can handle complex parameterizations in reduced spaces, and advance simulations in time by integrating in the latent space with a second network. Our method models a wide variety of fluid behaviors, thus enabling applications such as fast construction of simulations, interpolation of fluids with different parameters, time re-sampling, latent space simulations, and compression of fluid simulation data. Reconstructed velocity fields are generated up to 700x faster than re-simulating the data with the underlying CPU solver, while achieving compression rates of up to 1300x.
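    One way to picture the divergence-free guarantee: if the network outputs a scalar stream function and velocity is taken as its curl, divergence vanishes by construction. A small 2D finite-difference sketch on an assumed periodic grid, not the paper's exact 3D formulation:

        import numpy as np

        def curl_2d(psi, h=1.0):
            # u = d(psi)/dy, v = -d(psi)/dx (axis 0 = y, axis 1 = x).
            u = (np.roll(psi, -1, axis=0) - np.roll(psi, 1, axis=0)) / (2 * h)
            v = -(np.roll(psi, -1, axis=1) - np.roll(psi, 1, axis=1)) / (2 * h)
            return u, v

        def divergence(u, v, h=1.0):
            # Central-difference divergence; zero (to rounding) for any
            # field produced by curl_2d, hence usable as a loss penalty.
            du = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * h)
            dv = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2 * h)
            return du + dv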
  • Item
    Semantic Segmentation for Line Drawing Vectorization Using Neural Networks
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Kim, Byungsoo; Wang, Oliver; Öztireli, A. Cengiz; Gross, Markus; Gutierrez, Diego and Sheffer, Alla
    In this work, we present a method to vectorize raster images of line art. Inverting the rasterization procedure is inherently ill-conditioned, as there exist many possible vector images that could yield the same raster image. However, not all of these vector images are equally useful to the user, especially if performing further edits is desired. We therefore define the problem of computing an instance segmentation of the most likely set of paths that could have created the raster image. Once the segmentation is computed, we use existing vectorization approaches to vectorize each path, and then combine all paths into the final output vector image. To determine which set of paths is most likely, we train a pair of neural networks to provide semantic clues that help resolve ambiguities at intersection and overlap regions. These predictions are made considering the full context of the image, and are then globally combined by solving a Markov Random Field (MRF). We demonstrate the flexibility of our method by generating results on character datasets, a synthetic random line dataset, and a dataset composed of human drawn sketches. For all cases, our system accurately recovers paths that adhere to the semantics of the drawings.
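    The "globally combined by solving a Markov Random Field" step can be sketched with a tiny iterated-conditional-modes solver; this is a stand-in for whatever inference the authors actually use, with hypothetical names and costs:

        import numpy as np

        def icm_labeling(unary, pairs, pairwise_cost, iters=10):
            # unary: (n, L) per-segment label costs (e.g. network outputs);
            # pairs: list of (i, j) neighboring segments;
            # pairwise_cost: (L, L) cost of adjacent label pairs.
            n, L = unary.shape
            labels = unary.argmin(axis=1)
            neighbors = {i: [] for i in range(n)}
            for i, j in pairs:
                neighbors[i].append(j)
                neighbors[j].append(i)
            for _ in range(iters):          # sweep until (hopefully) stable
                for i in range(n):
                    cost = unary[i].copy()
                    for j in neighbors[i]:
                        cost = cost + pairwise_cost[:, labels[j]]
                    labels[i] = cost.argmin()
            return labels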
  • Item
    Neural Denoising for Deep-Z Monte Carlo Renderings
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Zhang, Xianyao; Röthlin, Gerhard; Zhu, Shilin; Aydin, Tunç Ozan; Salehi, Farnood; Gross, Markus; Papas, Marios; Bermano, Amit H.; Kalogerakis, Evangelos
    We present a kernel-predicting neural denoising method for path-traced deep-Z images that facilitates their usage in animation and visual effects production. Deep-Z images provide enhanced flexibility during compositing as they contain color, opacity, and other rendered data at multiple depth-resolved bins within each pixel. However, they are subject to noise, and rendering until convergence is prohibitively expensive. The current state of the art in deep-Z denoising yields objectionable artifacts, and current neural denoising methods are incapable of handling the variable number of depth bins in deep-Z images. Our method extends kernel-predicting convolutional neural networks to address the challenges stemming from denoising deep-Z images. We propose a hybrid reconstruction architecture that combines the depth-resolved reconstruction at each bin with the flattened reconstruction at the pixel level. Moreover, we propose depth-aware neighbor indexing of the depth-resolved inputs to the convolution and denoising kernel application operators, which reduces artifacts caused by depth misalignment present in deep-Z images. We evaluate our method on a production-quality deep-Z dataset, demonstrating significant improvements in denoising quality and performance compared to the current state-of-the-art deep-Z denoiser. By addressing the significant challenge of the cost associated with rendering path-traced deep-Z images, we believe that our approach will pave the way for broader adoption of deep-Z workflows in future productions.
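    A toy sketch of the depth-aware neighbor indexing idea: when filtering one bin, each neighboring pixel contributes its depth-closest bin rather than a bin at a fixed index. The data layout and names below are assumptions for illustration only:

        import numpy as np

        def nearest_depth_bin(bin_depths, z):
            # Index of the neighbor bin whose depth is closest to z.
            return int(np.argmin(np.abs(np.asarray(bin_depths) - z)))

        def filter_bin(pixels, p, b, weights):
            # pixels: dict pixel-coord -> list of (depth, rgb array) bins;
            # p: center pixel coord; b: bin index within p; weights: dict
            # neighbor pixel-coord -> scalar kernel weight (normalized).
            z = pixels[p][b][0]
            out = np.zeros(3)
            for q, w in weights.items():
                k = nearest_depth_bin([d for d, _ in pixels[q]], z)
                out += w * pixels[q][k][1]
            return out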
  • Item
    Learning Dynamic 3D Geometry and Texture for Video Face Swapping
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Otto, Christopher; Naruniec, Jacek; Helminger, Leonhard; Etterlin, Thomas; Mignone, Graziana; Chandran, Prashanth; Zoss, Gaspard; Schroers, Christopher; Gross, Markus; Gotardo, Paulo; Bradley, Derek; Weber, Romann; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Face swapping is the process of applying a source actor's appearance to a target actor's performance in a video. This is a challenging visual effect that has seen increasing demand in film and television production. Recent work has shown that data-driven methods based on deep learning can produce compelling effects at production quality in a fraction of the time required for a traditional 3D pipeline. However, the dominant approach operates only on 2D imagery without reference to the underlying facial geometry or texture, resulting in poor generalization under novel viewpoints and little artistic control. Methods that do incorporate geometry rely on pre-learned facial priors that do not adapt well to particular geometric features of the source and target faces. We approach the problem of face swapping from the perspective of learning simultaneous convolutional facial autoencoders for the source and target identities, using a shared encoder network with identity-specific decoders. The key novelty in our approach is that each decoder first lifts the latent code into a 3D representation, comprising a dynamic face texture and a deformable 3D face shape, before projecting this 3D face back onto the input image using a differentiable renderer. The coupled autoencoders are trained only on videos of the source and target identities, without requiring 3D supervision. By leveraging the learned 3D geometry and texture, our method achieves face swapping with higher quality than when using off-the-shelf monocular 3D face reconstruction, and overall lower FID score than state-of-the-art 2D methods. Furthermore, our 3D representation allows for efficient artistic control over the result, which can be hard to achieve with existing 2D approaches.
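    A data-flow sketch of the swap at inference time under the architecture this abstract describes (shared encoder, per-identity decoders emitting 3D shape plus texture, differentiable renderer); the stand-ins below are toys so the sketch executes, not real model components:

        import numpy as np

        def swap_frame(frame, encoder, decoders, renderer, source_id, camera):
            # Encode the TARGET frame with the shared encoder, decode with
            # the SOURCE identity's decoder to get 3D shape and texture,
            # then project back onto the frame with the renderer.
            code = encoder(frame)
            vertices, texture = decoders[source_id](code)
            return renderer(vertices, texture, camera)

        # Toy stand-ins so the sketch runs end to end.
        encoder = lambda frame: frame.mean(axis=(0, 1))
        decoders = {"src": lambda z: (np.zeros((100, 3)) + z[0],      # shape
                                      np.zeros((64, 64, 3)) + z[1])}  # texture
        renderer = lambda v, t, cam: t
        result = swap_frame(np.random.rand(8, 8, 3), encoder, decoders,
                            renderer, "src", camera=None)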
  • Item
    Interactive Sculpting of Digital Faces Using an Anatomical Modeling Paradigm
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Gruber, Aurel; Fratarcangeli, Marco; Zoss, Gaspard; Cattaneo, Roman; Beeler, Thabo; Gross, Markus; Bradley, Derek; Jacobson, Alec and Huang, Qixing
    Digitally sculpting 3D human faces is a very challenging task. It typically requires either 1) highly-skilled artists using complex software packages for high quality results, or 2) highly-constrained simple interfaces for consumer-level avatar creation, such as in game engines. We propose a novel interactive method for the creation of digital faces that is simple and intuitive to use, even for novice users, while consistently producing plausible 3D face geometry, and allowing editing freedom beyond traditional video game avatar creation. At the core of our system lies a specialized anatomical local face model (ALM), which is constructed from a dataset of several hundred 3D face scans. User edits are propagated to constraints for an optimization of our data-driven ALM model, ensuring the resulting face remains plausible even for simple edits like clicking and dragging surface points. We show how several natural interaction methods can be implemented in our framework, including direct control of the surface, indirect control of semantic features like age, ethnicity, gender, and BMI, as well as indirect control through manipulating the underlying bony structures. The result is a simple new method for creating digital human faces, for artists and novice users alike. Our method is attractive for low-budget VFX and animation productions, and our anatomical modeling paradigm can complement traditional game engine avatar design packages.
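    The "user edits become constraints for an optimization" idea can be sketched as a regularized least-squares fit of linear shape-model coefficients to dragged points; this is a simplified global-model analogue of the paper's local anatomical model, with assumed names:

        import numpy as np

        def fit_face_model(mean, basis, idx, targets, reg=0.1):
            # mean: (3n,) mean face; basis: (3n, k) linear shape basis;
            # idx: flat indices of user-constrained coordinates;
            # targets: (m,) dragged coordinate values; reg: plausibility prior.
            A = basis[idx]                                   # constrained rows
            b = targets - mean[idx]
            coeffs = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]),
                                     A.T @ b)
            return mean + basis @ coeffs             # edited, plausible face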
  • Item
    2017 Cover Image: Mixing Bowl
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Marra, Alessia; Nitti, Maurizio; Papas, Marios; Müller, Thomas; Gross, Markus; Jarosz, Wojciech; Novák, Jan; Chen, Min and Zhang, Hao (Richard)
  • Item
    Programmable Animation Texturing using Motion Stamps
    (The Eurographics Association and John Wiley & Sons Ltd., 2016) Milliez, Antoine; Guay, Martin; Cani, Marie-Paule; Gross, Markus; Sumner, Robert W.; Grinspun, Eitan and Bickel, Bernd and Dobashi, Yoshinori
    Our work on programmable animation texturing enhances the concept of texture mapping by letting artists stylize arbitrary animations using elementary animations, instantiated at the scale of their choice. The core of our workflow resides in two components: we first impose structure and temporal coherence over the animation data using a novel radius-based animation-aware clustering. The computed clusters conform to the user-specified scale, and follow the underlying animation regardless of its topology. Extreme mesh deformations, complex particle simulations, or simulated mesh animations with ever-changing topology can therefore be handled in a temporally coherent way. Then, in analogy to fragment shaders that specify an output color based on a texture and a collection of properties defined per vertex (position, texture coordinate, etc.), we provide a programmable interface to the user, letting them specify an output animation based on the collection of properties we extract per cluster (position, velocity, etc.). We equip elementary animations with a collection of parameters that are exposed in our programmable system, enabling users to script the animated textures depending on properties of the input cluster. We demonstrate the power of our system with complex animated textures created with minimal user input.
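    A compact sketch of the two components, assuming numpy: a greedy radius-based clustering of animated points, and a user-scriptable "shader" mapping per-cluster properties to an elementary animation. Everything here is illustrative, not the paper's system:

        import numpy as np

        def radius_clusters(points, radius):
            # Greedy radius-based clustering: a point joins the first
            # cluster center within `radius`, else it seeds a new cluster.
            centers, labels = [], []
            for p in points:
                for ci, c in enumerate(centers):
                    if np.linalg.norm(p - c) <= radius:
                        labels.append(ci)
                        break
                else:
                    centers.append(p)
                    labels.append(len(centers) - 1)
            return np.asarray(labels), np.asarray(centers)

        def stamp_shader(cluster):
            # User script in the fragment-shader analogy: map per-cluster
            # properties (velocity, radius) to an elementary animation.
            speed = np.linalg.norm(cluster["velocity"])
            return {"anim": "swirl" if speed > 1.0 else "pulse",
                    "scale": cluster["radius"]}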