Search Results

Now showing 1 - 9 of 9
  • Item
    2017 Cover Image: Mixing Bowl
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Marra, Alessia; Nitti, Maurizio; Papas, Marios; Müller, Thomas; Gross, Markus; Jarosz, Wojciech; Novák, Jan; Chen, Min and Zhang, Hao (Richard)
  • Item
    Flow-Induced Inertial Steady Vector Field Topology
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Günther, Tobias; Gross, Markus; Loic Barthe and Bedrich Benes
    Traditionally, vector field visualization is concerned with 2D and 3D flows. Yet, many concepts can be extended to general dynamical systems, including the higher-dimensional problem of modeling the motion of finite-sized objects in fluids. In the steady case, the trajectories of these so-called inertial particles appear as tangent curves of a 4D or 6D vector field. These higher-dimensional flows are difficult to map to lower-dimensional spaces, which makes their visualization a challenging problem. We focus on vector field topology, which allows scientists to study asymptotic particle behavior. As recent work on the 2D case has shown, both extraction and classification of isolated critical points depend on the underlying particle model. In this paper, we aim for a model-independent classification technique, which we apply to two different particle models in not only 2D, but also 3D cases. We show that the classification can be done by performing an eigenanalysis of the spatial derivatives' velocity subspace of the higher-dimensional 4D or 6D flow. We construct glyphs that depict not only the types of critical points, but also encode the directional information given by the eigenvectors. We show that the eigenvalues and eigenvectors of the inertial phase space have sufficient symmetries and structure so that they can be depicted in 2D or 3D, instead of 4D or 6D.
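    The classification step lends itself to a compact illustration. Below is a minimal Python sketch (not the paper's implementation): given the Jacobian of the flow at a critical point, the signs of the eigenvalues' real parts distinguish sources, sinks, and saddles, while imaginary parts indicate swirling behavior. The 2x2 example Jacobian is hypothetical.
    ```python
    import numpy as np

    def classify_critical_point(J):
        """Classify an isolated critical point from the Jacobian J of the
        flow at that point (illustrative helper, not the paper's method)."""
        eigvals, eigvecs = np.linalg.eig(J)
        re = np.real(eigvals)
        if np.all(re < 0):
            kind = "sink (attracting)"
        elif np.all(re > 0):
            kind = "source (repelling)"
        else:
            kind = "saddle"
        if np.any(np.abs(np.imag(eigvals)) > 1e-12):
            kind += ", spiraling"
        return kind, eigvals, eigvecs

    # Hypothetical Jacobian of a 2D flow at a critical point
    J = np.array([[-0.5, -1.0],
                  [ 1.0, -0.5]])
    print(classify_critical_point(J)[0])  # -> sink (attracting), spiraling
    ```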
  • Item
    Decoupled Opacity Optimization for Points, Lines and Surfaces
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Günther, Tobias; Theisel, Holger; Gross, Markus; Loic Barthe and Bedrich Benes
    Displaying geometry in flow visualization is often accompanied by occlusion problems, making it difficult to perceive information that is relevant in the respective application. In a recent technique, named opacity optimization, the balance of occlusion avoidance and the selection of meaningful geometry was recognized to be a view-dependent, global optimization problem. The method solves a bounded-variable least-squares problem that minimizes energy terms for reducing occlusion and background clutter, together with smoothness and regularization terms. The original technique operates on an object-space discretization and was shown for line and surface geometry. Recently, it has been extended to volumes, where it was solved locally per ray by dropping the smoothness energy term and replacing it with pre-filtering of the importance measure. In this paper, we pick up the idea of splitting the opacity optimization problem into two smaller problems. The first is a minimization with an analytic solution, and the second is a smoothing of the obtained minimizer in object space. The minimization problem can thereby be solved locally per pixel, making it possible to combine all geometry types (points, lines and surfaces) consistently in a single optimization framework. We call this decoupled opacity optimization and apply it to a number of steady 3D vector fields.
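    As a rough illustration of the decoupling, the sketch below splits the problem into a closed-form per-fragment minimizer followed by object-space smoothing. The quadratic energy, the weights lam and mu, and the importance/occlusion inputs are illustrative assumptions, not the paper's exact formulation.
    ```python
    import numpy as np

    def per_fragment_opacity(importance, occlusion, lam=1.0, mu=1.0):
        # Assumed quadratic energy per fragment (not the paper's exact terms):
        #   E(a) = lam * importance * (1 - a)^2 + mu * occlusion * a^2
        # Setting dE/da = 0 gives the closed-form minimizer below.
        a = lam * importance / (lam * importance + mu * occlusion + 1e-12)
        return np.clip(a, 0.0, 1.0)

    def smooth_in_object_space(alpha, iters=20, t=0.5):
        # Second stage: Laplacian smoothing of the per-sample minimizer
        # along a line's parameterization, standing in for the object-space
        # smoothing of the obtained minimizer.
        alpha = alpha.copy()
        for _ in range(iters):
            alpha[1:-1] = (1 - t) * alpha[1:-1] + t * 0.5 * (alpha[:-2] + alpha[2:])
        return alpha
    ```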
  • Item
    Designing Cable-Driven Actuation Networks for Kinematic Chains and Trees
    (ACM, 2017) Megaro, Vittorio; Knoop, Espen; Spielberg, Andrew; Levin, David I.W.; Matusik, Wojciech; Gross, Markus; Thomaszewski, Bernhard; Bächer, Moritz; Bernhard Thomaszewski and KangKang Yin and Rahul Narain
    In this paper we present an optimization-based approach for the design of cable-driven kinematic chains and trees. Our system takes as input a hierarchical assembly consisting of rigid links jointed together with hinges. The user also specifies a set of target poses or keyframes using inverse kinematics. Our approach places torsional springs at the joints and computes a cable network that allows us to reproduce the specified target poses. We start with a large set of cables that have randomly chosen routing points and we gradually remove the redundancy. Then we refine the routing points taking into account the path between poses or keyframes in order to further reduce the number of cables and minimize required control forces. We propose a reduced coordinate formulation that links control forces to joint angles and routing points, enabling the co-optimization of a cable network together with the required actuation forces. We demonstrate the efficacy of our technique by designing and fabricating a cable-driven, animated character, an animatronic hand, and a specialized gripper.
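    The reduced coordinate idea, relating control forces to joint angles and routing points, can be sketched with the principle of virtual work: the torque a cable exerts equals the cable tension times the negative gradient of cable length with respect to the joint angles. The forward-kinematics function fk below is a hypothetical placeholder.
    ```python
    import numpy as np

    def cable_length(theta, fk):
        # fk(theta) -> (n, 3) world-space positions of the cable's routing
        # points for joint angles theta (hypothetical forward kinematics)
        pts = fk(theta)
        return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

    def joint_torques(theta, fk, tension, eps=1e-6):
        # Principle of virtual work: tau = -tension * dL/dtheta, with the
        # length gradient estimated by central finite differences.
        grad = np.zeros_like(theta)
        for i in range(len(theta)):
            tp, tm = theta.copy(), theta.copy()
            tp[i] += eps
            tm[i] -= eps
            grad[i] = (cable_length(tp, fk) - cable_length(tm, fk)) / (2 * eps)
        return -tension * grad
    ```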
  • Item
    Practical Path Guiding for Efficient Light-transport Simulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Müller, Thomas; Gross, Markus; Novák, Jan; Zwicker, Matthias and Sander, Pedro
    We present a robust, unbiased technique for intelligent light-path construction in path-tracing algorithms. Inspired by existing path-guiding algorithms, our method learns an approximate representation of the scene's spatio-directional radiance field in an unbiased and iterative manner. To that end, we propose an adaptive spatio-directional hybrid data structure, referred to as SD-tree, for storing and sampling incident radiance. The SD-tree consists of an upper part (a binary tree that partitions the 3D spatial domain of the light field) and a lower part (a quadtree that partitions the 2D directional domain). We further present a principled way to automatically budget training and rendering computations to minimize the variance of the final image. Our method does not require tuning hyperparameters, although we allow limiting the memory footprint of the SD-tree. The aforementioned properties, its ease of implementation, and its stable performance make our method compatible with production environments. We demonstrate the merits of our method on scenes with difficult visibility, detailed geometry, and complex specular-glossy light transport, achieving better performance than previous state-of-the-art algorithms.
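    A minimal sketch of the directional half of such a structure is given below; the class name, the 2D direction parameterization, and the omission of adaptive subdivision and budgeting are simplifying assumptions on top of the abstract.
    ```python
    import random

    class DirQuadTree:
        """Sketch of the directional (lower) part of an SD-tree: a quadtree
        over a 2D parameterization of directions. The spatial (upper) binary
        tree would hold one such quadtree per leaf."""

        def __init__(self, children=None):
            self.energy = 0.0
            self.children = children  # list of four DirQuadTree, or None

        def record(self, u, v, radiance):
            # Accumulate incident radiance along the path to the leaf at (u, v).
            self.energy += radiance
            if self.children:
                child, u, v = self._descend(u, v)
                child.record(u, v, radiance)

        def sample(self, rng=random):
            # Draw (u, v) with probability proportional to recorded energy.
            if not self.children:
                return rng.random(), rng.random()
            weights = [c.energy for c in self.children]
            if sum(weights) <= 0.0:
                i = rng.randrange(4)
            else:
                i = rng.choices(range(4), weights=weights)[0]
            u, v = self.children[i].sample(rng)
            return (i % 2 + u) / 2.0, (i // 2 + v) / 2.0

        def _descend(self, u, v):
            # Pick the child quadrant and remap (u, v) into its local frame.
            i = (1 if u >= 0.5 else 0) + (2 if v >= 0.5 else 0)
            return self.children[i], (u * 2.0) % 1.0, (v * 2.0) % 1.0
    ```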
  • Item
    Enriching Facial Blendshape Rigs with Physical Simulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Kozlov, Yeara; Bradley, Derek; Bächer, Moritz; Thomaszewski, Bernhard; Beeler, Thabo; Gross, Markus; Loic Barthe and Bedrich Benes
    Oftentimes facial animation is created separately from overall body motion. Since convincing facial animation is challenging enough in itself, artists tend to create and edit the face motion in isolation. If the face animation is instead derived from motion capture, the capture is typically performed in a mo-cap booth while the actor sits relatively still. In either case, recombining the isolated face animation with body and head motion is non-trivial and often produces an uncanny result if the body dynamics are not properly reflected on the face (e.g. the bouncing of facial tissue when running). We tackle this problem by introducing a simple and intuitive system that allows artists to add physics to facial blendshape animation. Unlike previous methods that try to add physics to face rigs, our method preserves the original facial animation as closely as possible. To this end, we present a novel simulation framework that uses the original animation as per-frame rest-poses without adding spurious forces. As a result, in the absence of any external forces or rigid head motion, the facial performance will exactly match the artist-created blendshape animation. In addition, we propose the concept of blendmaterials to give artists an intuitive means to account for changing material properties due to muscle activation. The system automatically combines facial animation and head motion such that they are consistent, while preserving the original animation as closely as possible. It is easy to use and readily integrates with existing animation pipelines.
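    The per-frame rest-pose idea can be illustrated with a toy per-vertex spring model (a drastic simplification of the paper's simulation framework; all names and constants below are assumptions): because the spring target is this frame's blendshape result rather than a static rest shape, zero external force reproduces the artist's animation exactly.
    ```python
    import numpy as np

    def simulate_step(x, v, rest_pose, dt=1.0 / 24.0, k=50.0, m=1.0, f_ext=0.0):
        """One symplectic Euler step of a toy per-vertex spring model.

        rest_pose is this frame's blendshape result (the per-frame rest
        pose). With f_ext == 0 and x initialized to rest_pose, x stays on
        the artist's animation. A per-vertex array k, blended with muscle
        activation, would mimic the blendmaterials idea."""
        f = -k * (x - rest_pose) + f_ext  # spring toward the animated pose
        v = v + dt * f / m
        x = x + dt * v
        return x, v
    ```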
  • Item
    General Point Sampling with Adaptive Density and Correlations
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Roveri, Riccardo; Öztireli, A. Cengiz; Gross, Markus; Loic Barthe and Bedrich Benes
    Analyzing and generating sampling patterns are fundamental problems for many applications in computer graphics. Ideally, point patterns should conform to the problem at hand with spatially adaptive density and correlations. Although there exist excellent algorithms that can generate point distributions with spatially adaptive density or anisotropy, the pair-wise correlation model, blue noise being the most common, is assumed to be constant throughout the space. Analogously, by relying on possibly modulated pair-wise difference vectors, the analysis methods are designed to study only such spatially constant correlations. In this paper, we present the first techniques to analyze and synthesize point patterns with adaptive density and correlations. This provides a comprehensive framework for understanding and utilizing general point sampling. Starting from fundamental measures from stochastic point processes, we propose an analysis framework for general distributions, and a novel synthesis algorithm that can generate point distributions with spatio-temporally adaptive density and correlations based on a locally stationary point process model. Our techniques also extend to general metric spaces. We illustrate the utility of the new techniques on the analysis and synthesis of real-world distributions, image reconstruction, spatio-temporal stippling, and geometry sampling.
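    For intuition, here is a standard kernel estimator of the pair correlation function for a 2D point set; it ignores boundary effects and is spatially global, whereas the paper's locally stationary model would additionally window the pairs around each location. All parameter choices are illustrative.
    ```python
    import numpy as np

    def pair_correlation(points, radii, sigma=0.01, area=1.0):
        """Kernel estimate of the PCF g(r) of a 2D point set in a domain of
        the given area (standard estimator sketch, no edge correction).
        radii should be a 1D array of positive distances."""
        radii = np.asarray(radii, dtype=float)
        n = len(points)
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        d = d[np.triu_indices(n, k=1)]          # unique pairwise distances
        k = np.exp(-(radii[:, None] - d[None, :]) ** 2 / (2 * sigma ** 2))
        k /= np.sqrt(2 * np.pi) * sigma         # Gaussian kernel weights
        lam = n / area                          # mean point density
        # 2 * (sum over unique pairs), normalized by density and ring length
        return 2.0 * k.sum(axis=1) / (n * lam * 2 * np.pi * radii)
    ```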
  • Item
    DeepGarment: 3D Garment Shape Estimation from a Single Image
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Danerek, Radek; Dibra, Endri; Öztireli, A. Cengiz; Ziegler, Remo; Gross, Markus; Loic Barthe and Bedrich Benes
    3D garment capture is an important component for various applications such as free-viewpoint video, virtual avatars, online shopping, and virtual cloth fitting. Due to the complexity of the deformations, capturing 3D garment shapes requires controlled and specialized setups. A viable alternative is image-based garment capture. Capturing 3D garment shapes from a single image, however, is a challenging problem, and current solutions come with assumptions on the lighting, camera calibration, complexity of the human or mannequin poses considered, and, more importantly, a stable physical state for the garment and the underlying human body. In addition, most existing works require manual interaction and exhibit high run-times. We propose a new technique that overcomes these limitations, making garment shape estimation from an image a practical approach for dynamic garment capture. Starting from synthetic garment shape data generated through physically based simulations from various human bodies in complex poses obtained through mocap sequences, and rendered under varying camera positions and lighting conditions, our novel method learns a mapping from rendered garment images to the underlying 3D garment model. This is achieved by training convolutional neural networks (CNNs) to estimate 3D vertex displacements from a template mesh with a specialized loss function. We illustrate that this technique is able to recover the global shape of dynamic 3D garments from a single image under varying factors such as challenging human poses, self-occlusions, various camera poses and lighting conditions, at interactive rates. Results improve further if more than one view is integrated. Additionally, we show applications of our method to videos.
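    A toy stand-in for the described network is sketched below; the architecture, layer sizes, and the plain L2 loss are assumptions for illustration, not the paper's specialized design.
    ```python
    import torch
    import torch.nn as nn

    class GarmentNet(nn.Module):
        """Toy CNN: regress per-vertex displacements of a template garment
        mesh from a single rendered image (illustrative sizes)."""

        def __init__(self, n_vertices):
            super().__init__()
            self.n_vertices = n_vertices
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            )
            self.head = nn.Linear(64 * 4 * 4, n_vertices * 3)

        def forward(self, img):                       # img: (B, 3, H, W)
            disp = self.head(self.features(img))
            return disp.view(-1, self.n_vertices, 3)  # per-vertex offsets

    # Targets would come from the physically based simulations; a plain
    # L2 loss stands in here for the paper's specialized loss function.
    loss_fn = nn.MSELoss()
    ```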
  • Item
    Example-Based Brushes for Coherent Stylized Renderings
    (Association for Computing Machinery, Inc (ACM), 2017) Zheng, Ming; Milliez, Antoine; Gross, Markus; Sumner, Robert W.; Holger Winnemoeller and Lyn Bartram
    Painterly stylization is the cornerstone of non-photorealistic rendering. Inspired by the versatility of paint as a physical medium, existing methods target intuitive interfaces that mimic physical brushes, providing artists the ability to intuitively place paint strokes in a digital scene. Other work focuses on physical simulation of the interaction between paint and paper or realistic rendering of wet and dry paint. In our work, we leverage the versatility of example-based methods that can generate paint strokes of arbitrary shape and style based on a collection of images acquired from physical media. Such ideas have gained popularity since they do not require cumbersome physical simulation and achieve high fidelity without the need of a specific model or rule set. However, existing methods are limited to the generation of static 2D paintings and cannot be applied in the context of 3D painting and animation, where paint strokes change shape and length as the camera viewport moves. Our method targets this shortcoming by generating temporally coherent example-based paint strokes that accommodate such length and shape changes. We demonstrate the robustness of our method with a 2D painting application that provides immediate feedback to the user and show how our brush model can be applied to the screen-space rendering of 3D paintings on a variety of examples.
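    One ingredient of such coherence can be sketched as arc-length parameterization of a stroke: mapping the brush exemplar by relative arc length keeps texture features anchored to the same fraction of the stroke as its screen-space length changes between frames. This is an illustrative fragment, not the paper's brush model.
    ```python
    import numpy as np

    def stroke_uvs(stroke_pts):
        """Normalized arc-length coordinate in [0, 1] for each point of a
        polyline stroke. Re-evaluating this per frame keeps the exemplar
        texture attached to the same relative position along the stroke
        even as its total length changes."""
        seg = np.linalg.norm(np.diff(stroke_pts, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])
        return s / max(s[-1], 1e-12)
    ```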