Search Results

  • Item
    HairControl: A Tracking Solution for Directable Hair Simulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Milliez, Antoine; Sumner, Robert W.; Gross, Markus; Thomaszewski, Bernhard; Thuerey, Nils and Beeler, Thabo
    We present a method for adding artistic control to physics-based hair simulation. Taking as input an animation of a coarse set of guide hairs, we constrain a subsequent higher-resolution simulation of detail hairs to follow the input motion in a spatially-averaged sense. The resulting high-resolution motion adheres to the artistic intent, but is enhanced with detailed deformations and dynamics generated by physics-based simulation. The technical core of our approach is formed by a set of tracking constraints, requiring the center of mass of a given subset of detail hair to maintain its position relative to a reference point on the corresponding guide hair. As a crucial element of our formulation, we introduce the concept of dynamically changing constraint targets that allow reference points to slide along the guide hairs to provide sufficient flexibility for natural deformations. We furthermore propose to regularize the null space of the tracking constraints based on variance minimization, effectively controlling the amount of spread in the hair. We demonstrate the ability of our tracking solver to generate directable yet natural hair motion on a set of targeted experiments and show its application to production-level animations.
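    A minimal sketch of the two ingredients named above, assuming detail_pts is an (n, 3) array of detail-hair vertex positions and guide_pt is the (possibly sliding) reference point on the guide hair. Function names are illustrative, not from the paper; the constrained solver and the sliding-target update are omitted.

      import numpy as np

      def tracking_residual(detail_pts, guide_pt):
          # The center of mass of a subset of detail-hair vertices should
          # coincide with a reference point on the guide hair; the
          # constraint is satisfied when this residual is zero.
          return detail_pts.mean(axis=0) - guide_pt

      def variance_penalty(detail_pts):
          # Null-space regularizer: penalizes the spread of the detail
          # hairs around their center of mass, controlling the amount of
          # fan-out that the tracking constraint itself leaves free.
          com = detail_pts.mean(axis=0)
          return np.sum((detail_pts - com) ** 2)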
  • Item
    Stereo from Shading
    (The Eurographics Association, 2015) Chapiro, Alexandre; O'Sullivan, Carol; Jarosz, Wojciech; Gross, Markus; Smolic, Aljoscha; Jaakko Lehtinen and Derek Nowrouzezahrai
    We present a new method for creating and enhancing the stereoscopic 3D (S3D) sensation without using the parallax disparity between an image pair. S3D relies on a combination of cues to generate a feeling of depth, but only a few of these cues can easily be modified within a rendering pipeline without significantly changing the content. We explore one such cue, shading stereopsis, which to date has not been exploited for 3D rendering. By changing only the shading of objects between the left and right eye renders, we generate a noticeable increase in perceived depth. This effect can be used to create depth when applied to flat images, and to enhance depth when applied to shallow-depth S3D images. Our method modifies the shading normals of objects or materials, such that it can be flexibly and selectively applied in complex scenes with arbitrary numbers and types of lights and indirect illumination. Our results show examples of rendered stills and video, as well as live action footage.
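    As a rough illustration of the idea (not the paper's actual normal-modification scheme), one could tilt shading normals in opposite directions for the two eyes and shade each view separately, leaving geometry and disparity untouched. All names and the additive tilt are assumptions of this sketch.

      import numpy as np

      def normalize(v):
          return v / np.linalg.norm(v, axis=-1, keepdims=True)

      def shade_lambert(normals, light_dir, albedo):
          # Plain Lambertian shading; normals is (H, W, 3), light_dir is (3,).
          n_dot_l = np.clip(np.einsum('hwc,c->hw', normals, light_dir), 0.0, 1.0)
          return albedo * n_dot_l[..., None]

      def stereo_shading_pair(normals, light_dir, albedo, tilt_axis, s=0.05):
          # Tilt the shading normals in opposite directions per eye; only
          # shading differs between the views, not geometry or disparity.
          left = shade_lambert(normalize(normals + s * tilt_axis), light_dir, albedo)
          right = shade_lambert(normalize(normals - s * tilt_axis), light_dir, albedo)
          return left, right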
  • Item
    Example Based Repetitive Structure Synthesis
    (The Eurographics Association and John Wiley & Sons Ltd., 2015) Roveri, Riccardo; Öztireli, A. Cengiz; Martin, Sebastian; Solenthaler, Barbara; Gross, Markus; Mirela Ben-Chen and Ligang Liu
    We present an example based geometry synthesis approach for generating general repetitive structures. Our model is based on a meshless representation, unifying and extending previous synthesis methods. Structures in the example and output are converted into a functional representation, where the functions are defined by point locations and attributes. We then formulate synthesis as a minimization problem where patches from the output function are matched to those of the example. As compared to existing repetitive structure synthesis methods, the new algorithm offers several advantages. It handles general discrete and continuous structures, and their mixtures in the same framework. The smooth formulation leads to employing robust optimization procedures in the algorithm. Equipped with an accurate patch similarity measure and dedicated sampling control, the algorithm preserves local structures accurately, regardless of the initial distribution of output points. It can also progressively synthesize output structures in given subspaces, allowing users to interactively control and guide the synthesis in real-time. We present various results for continuous/discrete structures and their mixtures, residing on curves, submanifolds, volumes, and general subspaces, some of which are generated interactively.
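    In the spirit of the minimization described above, a toy version of the matching energy, assuming patches have already been sampled from the functional representation into fixed-size arrays; the robust optimizer, similarity measure, and sampling control from the paper are omitted.

      import numpy as np

      def synthesis_energy(output_patches, example_patches):
          # output_patches: (n, k, d), example_patches: (m, k, d) arrays of
          # sampled function values. Each output patch is charged the squared
          # distance to its best-matching example patch; minimizing the sum
          # pulls the output toward the example's local structures.
          total = 0.0
          for p in output_patches:
              total += np.sum((example_patches - p) ** 2, axis=(1, 2)).min()
          return total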
  • Item
    Deep Fluids: A Generative Network for Parameterized Fluid Simulations
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Kim, Byungsoo; Azevedo, Vinicius C.; Thuerey, Nils; Kim, Theodore; Gross, Markus; Solenthaler, Barbara; Alliez, Pierre and Pellacini, Fabio
    This paper presents a novel generative model to synthesize fluid simulations from a set of reduced parameters. A convolutional neural network is trained on a collection of discrete, parameterizable fluid simulation velocity fields. Due to the capability of deep learning architectures to learn representative features of the data, our generative model is able to accurately approximate the training data set, while providing plausible interpolated in-betweens. The proposed generative model is optimized for fluids by a novel loss function that guarantees divergence-free velocity fields at all times. In addition, we demonstrate that we can handle complex parameterizations in reduced spaces, and advance simulations in time by integrating in the latent space with a second network. Our method models a wide variety of fluid behaviors, thus enabling applications such as fast construction of simulations, interpolation of fluids with different parameters, time re-sampling, latent space simulations, and compression of fluid simulation data. Reconstructed velocity fields are generated up to 700x faster than re-simulating the data with the underlying CPU solver, while achieving compression rates of up to 1300x.
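    The loss described above guarantees divergence-free velocities; one standard construction that achieves this, sketched here in 2D with hypothetical function names, is to have the network predict a stream function and take its curl, which is divergence-free by construction (up to discretization error).

      import numpy as np

      def velocity_from_stream(psi, dx=1.0):
          # psi: (ny, nx) scalar stream function predicted by the network.
          # Its curl u = (dpsi/dy, -dpsi/dx) is a divergence-free velocity.
          dpsi_dy, dpsi_dx = np.gradient(psi, dx)
          return np.stack([dpsi_dy, -dpsi_dx], axis=-1)

      def divergence(u, dx=1.0):
          # Finite-difference divergence, usable as a sanity check or
          # as a penalty term in a loss.
          du_dx = np.gradient(u[..., 0], dx, axis=1)
          dv_dy = np.gradient(u[..., 1], dx, axis=0)
          return du_dx + dv_dy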
  • Item
    Differential Blending for Expressive Sketch-Based Posing
    (ACM SIGGRAPH / Eurographics Association, 2013) Öztireli, A. Cengiz; Baran, Ilya; Popa, Tiberiu; Dalstein, Boris; Sumner, Robert W.; Gross, Markus; Theodore Kim and Robert Sumner
    Generating highly expressive and caricatured poses can be difficult in 3D computer animation because artists must interact with characters indirectly through complex character rigs. Furthermore, since caricatured poses often involve large bends and twists, artifacts arise with traditional skinning algorithms that are not designed to blend large, disparate rotations and cannot represent extremely large rotations. To overcome these problems, we introduce a differential blending algorithm that can successfully encode and blend large transformations, overcoming the inherent limitation of previous skeletal representations. Based on this blending method, we illustrate a sketch-based interface that supports curved bones and implements the line-of-action concept from hand-drawn animation to create expressive poses in 3D animation. By interpolating stored differential transformations across temporal keyframes, our system also generates caricatured animation. We present a detailed technical analysis of our differential blending algorithm and show several posing and animation results created using our system to demonstrate the utility of our method in practice.
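    A much-simplified sketch of the underlying idea: represent each transformation as a rotation vector whose magnitude may exceed pi, blend linearly in that vector (log) domain, and exponentiate, which stays well behaved where direct matrix or quaternion blending breaks down. The paper's differential blending is considerably more general (curved bones, translations, keyframe interpolation); everything here is an assumption of the sketch.

      import numpy as np

      def exp_so3(rvec):
          # Rodrigues' formula: rotation vector (axis * angle) to a 3x3
          # matrix. The angle may exceed pi, e.g. a 540-degree twist.
          theta = np.linalg.norm(rvec)
          if theta < 1e-12:
              return np.eye(3)
          k = rvec / theta
          K = np.array([[0.0, -k[2], k[1]],
                        [k[2], 0.0, -k[0]],
                        [-k[1], k[0], 0.0]])
          return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

      def blend_rotations(rvecs, weights):
          # Blend in the vector (log) domain, where large twists and bends
          # stay representable, then map back to a rotation matrix.
          blended = sum(w * r for w, r in zip(weights, rvecs))
          return exp_so3(blended)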
  • Item
    Semantic Segmentation for Line Drawing Vectorization Using Neural Networks
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Kim, Byungsoo; Wang, Oliver; Öztireli, A. Cengiz; Gross, Markus; Gutierrez, Diego and Sheffer, Alla
    In this work, we present a method to vectorize raster images of line art. Inverting the rasterization procedure is inherently ill-conditioned, as there exist many possible vector images that could yield the same raster image. However, not all of these vector images are equally useful to the user, especially if performing further edits is desired. We therefore define the problem of computing an instance segmentation of the most likely set of paths that could have created the raster image. Once the segmentation is computed, we use existing vectorization approaches to vectorize each path, and then combine all paths into the final output vector image. To determine which set of paths is most likely, we train a pair of neural networks to provide semantic clues that help resolve ambiguities at intersection and overlap regions. These predictions are made considering the full context of the image, and are then globally combined by solving a Markov Random Field (MRF). We demonstrate the flexibility of our method by generating results on character datasets, a synthetic random line dataset, and a dataset composed of human drawn sketches. For all cases, our system accurately recovers paths that adhere to the semantics of the drawings.
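    For orientation only (this is not the paper's formulation), a toy iterated-conditional-modes solver shows how network predictions can be combined globally over an MRF: unary holds per-segment label costs from the networks, and pairwise scores label agreement between segments that meet at intersections or overlaps. All names and signatures are assumptions.

      import numpy as np

      def icm(unary, edges, pairwise, n_iters=10):
          # unary: (n_nodes, n_labels) costs, e.g. negative log-probabilities
          # from the networks; edges: pairs of segments whose labels interact;
          # pairwise(a, b): cost of assigning labels a and b to neighbors.
          n_nodes, n_labels = unary.shape
          nbrs = [[] for _ in range(n_nodes)]
          for a, b in edges:
              nbrs[a].append(b)
              nbrs[b].append(a)
          labels = unary.argmin(axis=1)
          for _ in range(n_iters):
              for i in range(n_nodes):
                  costs = unary[i].astype(float)
                  for j in nbrs[i]:
                      costs += [pairwise(l, labels[j]) for l in range(n_labels)]
                  labels[i] = costs.argmin()
          return labels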
  • Item
    Optimizing Stereo-to-Multiview Conversion for Autostereoscopic Displays
    (The Eurographics Association and John Wiley and Sons Ltd., 2014) Chapiro, Alexandre; Heinzle, Simon; Aydin, Tunç Ozan; Poulakos, Steven; Zwicker, Matthias; Smolic, Aljosa; Gross, Markus; B. Levy and J. Kautz
    We present a novel stereo-to-multiview video conversion method for glasses-free multiview displays. Different from previous stereo-to-multiview approaches, our mapping algorithm utilizes the limited depth range of autostereoscopic displays optimally and strives to preserve the scene's artistic composition and perceived depth even under strong depth compression. We first present an investigation of how perceived image quality relates to spatial frequency and disparity. The outcome of this study is utilized in a two-step mapping algorithm, where we (i) compress the scene depth using a non-linear global function to the depth range of an autostereoscopic display, and (ii) enhance the depth gradients of salient objects to restore the perceived depth and salient scene structure. Finally, an adapted image domain warping algorithm is proposed to generate the multiview output, which enables overall disparity range extension.
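    A toy rendition of the two-step mapping: a global nonlinear compression of disparity into the display's range, then a saliency-weighted amplification of local variation. The gamma exponent and the linear boost are illustrative stand-ins for the paper's perceptually derived functions.

      import numpy as np

      def compress_disparity(d, d_min, d_max, gamma=0.6):
          # Step (i): global nonlinear remap of scene disparity into the
          # display's comfortable range [d_min, d_max].
          t = (d - d.min()) / (d.max() - d.min() + 1e-12)
          return d_min + (d_max - d_min) * t ** gamma

      def enhance_salient(d, saliency, boost=1.5):
          # Step (ii): amplify local disparity variation where saliency is
          # high, restoring the perceived depth of important objects.
          mean = d.mean()
          return mean + (d - mean) * (1.0 + (boost - 1.0) * saliency)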
  • Item
    FreeCam: A Hybrid Camera System for Interactive Free-Viewpoint Video
    (The Eurographics Association, 2011) Kuster, Claudia; Popa, Tiberiu; Zach, Christopher; Gotsman, Craig; Gross, Markus; Peter Eisert and Joachim Hornegger and Konrad Polthier
    We describe FreeCam - a system capable of generating live free-viewpoint video by simulating the output of a virtual camera moving through a dynamic scene. The FreeCam sensing hardware consists of a small number of static color video cameras and state-of-the-art Kinect depth sensors, and the FreeCam software uses a number of advanced GPU processing and rendering techniques to seamlessly merge the input streams, providing a pleasant user experience. A system such as FreeCam is critical for applications such as telepresence, 3D video-conferencing and interactive 3D TV. FreeCam may also be used to produce multi-view video, which is critical to drive new-generation autostereoscopic lenticular 3D displays.
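    At the heart of any such system is reprojecting each color+depth stream into the virtual camera. A bare-bones sketch, assuming pinhole intrinsics K_src/K_dst and a rigid transform (R, t) from the source to the virtual view; the real pipeline adds GPU-side filtering, merging of multiple streams, and hole filling.

      import numpy as np

      def reproject(depth, color, K_src, K_dst, R, t):
          # Back-project every pixel of one color+depth stream with the
          # source intrinsics, move it rigidly into the virtual camera's
          # frame, and project it with the destination intrinsics.
          h, w = depth.shape
          ys, xs = np.mgrid[0:h, 0:w]
          pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
          pts = (np.linalg.inv(K_src) @ pix) * depth.reshape(1, -1)  # camera space
          pts = R @ pts + t[:, None]                                 # virtual frame
          proj = K_dst @ pts
          uv = (proj[:2] / proj[2:]).T                               # (N, 2) pixels
          return uv, color.reshape(-1, 3)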
  • Item
    Spatio-Temporal Geometry Fusion for Multiple Hybrid Cameras using Moving Least Squares Surfaces
    (The Eurographics Association and John Wiley and Sons Ltd., 2014) Kuster, Claudia; Bazin, Jean-Charles; Öztireli, Cengiz; Deng, Teng; Martin, Tobias; Popa, Tiberiu; Gross, Markus; B. Levy and J. Kautz
    Multi-view reconstruction aims at computing the geometry of a scene observed by a set of cameras. Accurate 3D reconstruction of dynamic scenes is a key component for a large variety of applications, ranging from special effects to telepresence and medical imaging. In this paper we propose a method based on Moving Least Squares surfaces which robustly and efficiently reconstructs dynamic scenes captured by a calibrated set of hybrid color+depth cameras. Our reconstruction provides spatio-temporal consistency and seamlessly fuses color and geometric information. We illustrate our approach on a variety of real sequences and demonstrate that it favorably compares to state-of-the-art methods.
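    For reference, the classic plane-fit variant of Moving Least Squares projection, which pulls a query point onto the surface implied by the fused samples; the paper's spatio-temporal formulation additionally weights samples across cameras and time. Array shapes and the bandwidth h are assumptions of this sketch.

      import numpy as np

      def mls_project(x, points, normals, h=0.05, n_iters=3):
          # points: (n, 3) fused depth samples, normals: (n, 3) estimated
          # normals. Repeatedly fit a Gaussian-weighted average plane near
          # x and project x onto it; the fixed point lies on the MLS surface.
          for _ in range(n_iters):
              w = np.exp(-np.sum((points - x) ** 2, axis=1) / h ** 2)
              w /= w.sum()
              c = w @ points                         # weighted centroid
              n = w @ normals
              n /= np.linalg.norm(n)                 # weighted average normal
              x = x - np.dot(x - c, n) * n           # project onto local plane
          return x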
  • Item
    Efficient Simulation of Example-Based Materials
    (The Eurographics Association, 2012) Schumacher, Christian; Thomaszewski, Bernhard; Coros, Stelian; Martin, Sebastian; Sumner, Robert; Gross, Markus; Jehee Lee and Paul Kry
    We present a new method for efficiently simulating art-directable deformable materials. We use example poses to define subspaces of desirable deformations via linear interpolation. As a central aspect of our approach, we use an incompatible representation for input and interpolated poses that allows us to interpolate between elements individually. This enables us to bypass costly reconstruction steps and we thus achieve significant performance improvements compared to previous work. As a natural continuation, we furthermore present a formulation of example-based plasticity. Finally, we extend the directability of example-based materials and explore a number of powerful control mechanisms. We demonstrate these novel concepts on a number of solid and shell animations including artistic deformation behaviors, cartoon physics, and example-based pose space dynamics.
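    The key idea of interpolating between elements individually can be sketched as blending per-element deformation gradients and penalizing deviation from the blend. This toy version, with hypothetical array shapes and a plain quadratic penalty, deliberately omits the paper's incompatible representation, plasticity model, and elastic solver.

      import numpy as np

      def interpolated_target(example_F, weights):
          # example_F: (n_examples, n_elements, 3, 3) per-element deformation
          # gradients of the example poses; weights: (n_examples,). Each
          # element blends its own examples independently, so no global
          # reconstruction of an interpolated pose is needed.
          return np.einsum('e,eijk->ijk', weights, example_F)

      def example_energy(F, example_F, weights, k=1.0):
          # Quadratic penalty pulling each element's deformation gradient
          # toward its interpolated example target.
          target = interpolated_target(example_F, weights)
          return 0.5 * k * np.sum((F - target) ** 2)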