Search Results
Now showing 1 - 10 of 65
Item: HairControl: A Tracking Solution for Directable Hair Simulation (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Milliez, Antoine; Sumner, Robert W.; Gross, Markus; Thomaszewski, Bernhard
Editors: Thuerey, Nils; Beeler, Thabo
Abstract: We present a method for adding artistic control to physics-based hair simulation. Taking as input an animation of a coarse set of guide hairs, we constrain a subsequent higher-resolution simulation of detail hairs to follow the input motion in a spatially averaged sense. The resulting high-resolution motion adheres to the artistic intent, but is enhanced with detailed deformations and dynamics generated by physics-based simulation. The technical core of our approach is formed by a set of tracking constraints, requiring the center of mass of a given subset of detail hair to maintain its position relative to a reference point on the corresponding guide hair. As a crucial element of our formulation, we introduce the concept of dynamically changing constraint targets that allow reference points to slide along the guide hairs to provide sufficient flexibility for natural deformations. We furthermore propose to regularize the null space of the tracking constraints based on variance minimization, effectively controlling the amount of spread in the hair. We demonstrate the ability of our tracking solver to generate directable yet natural hair motion on a set of targeted experiments and show its application to production-level animations.

Item: Stereo from Shading (The Eurographics Association, 2015)
Authors: Chapiro, Alexandre; O'Sullivan, Carol; Jarosz, Wojciech; Gross, Markus; Smolic, Aljoscha
Editors: Lehtinen, Jaakko; Nowrouzezahrai, Derek
Abstract: We present a new method for creating and enhancing the stereoscopic 3D (S3D) sensation without using the parallax disparity between an image pair.
S3D relies on a combination of cues to generate a feeling of depth, but only a few of these cues can easily be modified within a rendering pipeline without significantly changing the content. We explore one such cue, shading stereopsis, which to date has not been exploited for 3D rendering. By changing only the shading of objects between the left and right eye renders, we generate a noticeable increase in perceived depth. This effect can be used to create depth when applied to flat images, and to enhance depth when applied to shallow-depth S3D images. Our method modifies the shading normals of objects or materials, such that it can be flexibly and selectively applied in complex scenes with arbitrary numbers and types of lights and indirect illumination. Our results show examples of rendered stills and video, as well as live action footage.

Item: Example Based Repetitive Structure Synthesis (The Eurographics Association and John Wiley & Sons Ltd., 2015)
Authors: Roveri, Riccardo; Öztireli, A. Cengiz; Martin, Sebastian; Solenthaler, Barbara; Gross, Markus
Editors: Ben-Chen, Mirela; Liu, Ligang
Abstract: We present an example-based geometry synthesis approach for generating general repetitive structures. Our model is based on a meshless representation, unifying and extending previous synthesis methods. Structures in the example and output are converted into a functional representation, where the functions are defined by point locations and attributes. We then formulate synthesis as a minimization problem where patches from the output function are matched to those of the example. As compared to existing repetitive structure synthesis methods, the new algorithm offers several advantages. It handles general discrete and continuous structures, and their mixtures, in the same framework. The smooth formulation leads to employing robust optimization procedures in the algorithm.
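The minimization this abstract describes, matching output patches to example patches, can be illustrated with a generic nearest-patch energy. This is a minimal stand-in assuming flattened patch vectors and a plain squared-Euclidean metric; the paper's actual meshless, functional formulation is richer:

```python
import numpy as np

def synthesis_energy(output_patches, example_patches):
    """Generic patch-based synthesis energy: each output patch is compared
    to its best-matching example patch, and the summed squared distances
    form the objective that synthesis would minimize (an illustrative
    sketch, not the paper's exact functional formulation).

    output_patches:  (m, k) m output patches, each flattened to k values
    example_patches: (n, k) n example patches
    """
    # Pairwise squared distances between every output and example patch.
    d2 = ((output_patches[:, None, :] - example_patches[None, :, :]) ** 2).sum(-1)
    # Nearest-example distance per output patch, summed over the output.
    return d2.min(axis=1).sum()
```

A synthesis procedure would then move output points so as to decrease such an energy, for instance by gradient descent on the point locations.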
Equipped with an accurate patch similarity measure and dedicated sampling control, the algorithm preserves local structures accurately, regardless of the initial distribution of output points. It can also progressively synthesize output structures in given subspaces, allowing users to interactively control and guide the synthesis in real-time. We present various results for continuous/discrete structures and their mixtures, residing on curves, submanifolds, volumes, and general subspaces, some of which are generated interactively.

Item: Deep Compositional Denoising for High-quality Monte Carlo Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Zhang, Xianyao; Manzi, Marco; Vogels, Thijs; Dahlberg, Henrik; Gross, Markus; Papas, Marios
Editors: Bousseau, Adrien; McGuire, Morgan
Abstract: We propose a deep-learning method for automatically decomposing noisy Monte Carlo renderings into components that kernel-predicting denoisers can denoise more effectively. In our model, a neural decomposition module learns to predict noisy components and corresponding feature maps, which are consecutively reconstructed by a denoising module. The components are predicted based on statistics aggregated at the pixel level by the renderer. Denoising these components individually allows the use of per-component kernels that adapt to each component's noisy signal characteristics. Experimentally, we show that the proposed decomposition module consistently improves the denoising quality of current state-of-the-art kernel-predicting denoisers on large-scale academic and production datasets.

Item: Glyph-Based Visualization of Affective States (The Eurographics Association, 2020)
Authors: Kovacevic, Nikola; Wampfler, Rafael; Solenthaler, Barbara; Gross, Markus; Günther, Tobias
Editors: Kerren, Andreas; Garth, Christoph; Marai, G. Elisabeta
Abstract: Decades of research in psychology on the formal measurement of emotions led to the concept of affective states.
Visualizing the measured affective state can be useful in education, as it allows teachers to adapt lessons based on the affective state of students. In the entertainment industry, game mechanics can be adapted based on the boredom and frustration levels of a player. Visualizing the affective state can also increase the emotional self-awareness of the user whose state is being measured, which can have an impact on well-being. However, graphical user interfaces seldom visualize the user's affective state, focusing instead on the purely objective interaction between the system and the user. This paper proposes two graphical user interface widgets that visualize the user's affective state, ensuring a compact and unobtrusive visualization. In a user study with 644 participants, the widgets were evaluated against a baseline widget and tested for intuitiveness and understandability. Particularly in terms of understandability, the baseline was outperformed by our two widgets.

Item: Deep Fluids: A Generative Network for Parameterized Fluid Simulations (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Kim, Byungsoo; Azevedo, Vinicius C.; Thuerey, Nils; Kim, Theodore; Gross, Markus; Solenthaler, Barbara
Editors: Alliez, Pierre; Pellacini, Fabio
Abstract: This paper presents a novel generative model to synthesize fluid simulations from a set of reduced parameters. A convolutional neural network is trained on a collection of discrete, parameterizable fluid simulation velocity fields. Due to the capability of deep learning architectures to learn representative features of the data, our generative model is able to accurately approximate the training data set, while providing plausible interpolated in-betweens. The proposed generative model is optimized for fluids by a novel loss function that guarantees divergence-free velocity fields at all times.
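One standard way to obtain divergence-free velocity fields by construction is to produce a scalar stream function and take its curl; whether the Deep Fluids loss is realized exactly this way is an assumption here, and the sketch below is limited to 2D fields on a regular grid:

```python
import numpy as np

def velocity_from_stream(psi, h=1.0):
    """Build a discretely divergence-free 2D velocity field as the curl of
    a scalar stream function psi (a common construction; an illustrative
    stand-in, not the paper's confirmed implementation).

    psi: (H, W) stream function on a regular grid, indexed [y, x], spacing h
    returns u, v on the interior (H-2, W-2) grid
    """
    # u = d(psi)/dy, v = -d(psi)/dx, both by central differences.
    u = (psi[2:, 1:-1] - psi[:-2, 1:-1]) / (2 * h)
    v = -(psi[1:-1, 2:] - psi[1:-1, :-2]) / (2 * h)
    return u, v
```

Because the central-difference curl and divergence operators cancel exactly, the discrete divergence of the resulting field is zero everywhere, not merely small.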
In addition, we demonstrate that we can handle complex parameterizations in reduced spaces, and advance simulations in time by integrating in the latent space with a second network. Our method models a wide variety of fluid behaviors, thus enabling applications such as fast construction of simulations, interpolation of fluids with different parameters, time re-sampling, latent space simulations, and compression of fluid simulation data. Reconstructed velocity fields are generated up to 700x faster than re-simulating the data with the underlying CPU solver, while achieving compression rates of up to 1300x.

Item: Differential Blending for Expressive Sketch-Based Posing (ACM SIGGRAPH / Eurographics Association, 2013)
Authors: Öztireli, A. Cengiz; Baran, Ilya; Popa, Tiberiu; Dalstein, Boris; Sumner, Robert W.; Gross, Markus
Editors: Kim, Theodore; Sumner, Robert
Abstract: Generating highly expressive and caricatured poses can be difficult in 3D computer animation because artists must interact with characters indirectly through complex character rigs. Furthermore, since caricatured poses often involve large bends and twists, artifacts arise with traditional skinning algorithms that are not designed to blend large, disparate rotations and cannot represent extremely large rotations. To overcome these problems, we introduce a differential blending algorithm that can successfully encode and blend large transformations, overcoming the inherent limitation of previous skeletal representations. Based on this blending method, we illustrate a sketch-based interface that supports curved bones and implements the line-of-action concept from hand-drawn animation to create expressive poses in 3D animation. By interpolating stored differential transformations across temporal keyframes, our system also generates caricatured animation.
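Blending in a logarithmic (rotation-vector) domain is the classic remedy for the large-rotation artifacts described above, since a rotation vector can encode twists far beyond 360 degrees. The sketch below illustrates that idea only; it is not the paper's differential blending operator:

```python
import numpy as np

def rodrigues(rv):
    """Rotation matrix from a rotation vector (axis * angle), via the
    Rodrigues formula."""
    theta = np.linalg.norm(rv)
    if theta < 1e-12:
        return np.eye(3)
    k = rv / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def blend_log_rotations(rotvecs, weights):
    """Blend rotations by averaging in the log (rotation-vector) domain.
    Rotation vectors can encode multi-turn twists (e.g. 720 degrees), so
    the blend behaves sensibly where matrix or quaternion blending fails.
    Illustrative stand-in for log-space blending, not the paper's method.
    """
    rv = sum(w * np.asarray(v, float) for w, v in zip(weights, rotvecs))
    return rodrigues(rv)
```

For example, blending the identity with a 720-degree twist at equal weights yields a 360-degree twist, whereas a quaternion cannot even distinguish a 720-degree twist from the identity.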
We present a detailed technical analysis of our differential blending algorithm and show several posing and animation results created using our system to demonstrate the utility of our method in practice.

Item: Semantic Segmentation for Line Drawing Vectorization Using Neural Networks (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Kim, Byungsoo; Wang, Oliver; Öztireli, A. Cengiz; Gross, Markus
Editors: Gutierrez, Diego; Sheffer, Alla
Abstract: In this work, we present a method to vectorize raster images of line art. Inverting the rasterization procedure is inherently ill-conditioned, as there exist many possible vector images that could yield the same raster image. However, not all of these vector images are equally useful to the user, especially if further edits are desired. We therefore define the problem as computing an instance segmentation of the most likely set of paths that could have created the raster image. Once the segmentation is computed, we use existing vectorization approaches to vectorize each path, and then combine all paths into the final output vector image. To determine which set of paths is most likely, we train a pair of neural networks to provide semantic clues that help resolve ambiguities at intersection and overlap regions. These predictions are made considering the full context of the image, and are then globally combined by solving a Markov Random Field (MRF). We demonstrate the flexibility of our method by generating results on character datasets, a synthetic random line dataset, and a dataset composed of human-drawn sketches.
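A common form for globally combining per-site network predictions with an MRF is a unary-plus-Potts energy; the potentials and names below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def potts_energy(labels, unary, edges, w):
    """Energy of a path-labeling MRF: per-site costs (e.g. from network
    predictions) plus a Potts smoothness term over neighboring sites.
    Illustrative potentials only; the paper's actual MRF may differ.

    labels: (n,) integer path label per site
    unary:  (n, L) cost of assigning each of L labels at each site
    edges:  list of (i, j) neighboring site pairs
    w:      penalty paid whenever neighbors take different labels
    """
    data = unary[np.arange(len(labels)), labels].sum()
    smooth = w * sum(labels[i] != labels[j] for i, j in edges)
    return data + smooth
```

An inference procedure (e.g. graph cuts or belief propagation) would then search for the labeling minimizing this energy, yielding the globally consistent path assignment.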
For all cases, our system accurately recovers paths that adhere to the semantics of the drawings.

Item: Neural Denoising for Deep-Z Monte Carlo Renderings (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Authors: Zhang, Xianyao; Röthlin, Gerhard; Zhu, Shilin; Aydin, Tunç Ozan; Salehi, Farnood; Gross, Markus; Papas, Marios
Editors: Bermano, Amit H.; Kalogerakis, Evangelos
Abstract: We present a kernel-predicting neural denoising method for path-traced deep-Z images that facilitates their usage in animation and visual effects production. Deep-Z images provide enhanced flexibility during compositing as they contain color, opacity, and other rendered data at multiple depth-resolved bins within each pixel. However, they are subject to noise, and rendering until convergence is prohibitively expensive. The current state of the art in deep-Z denoising yields objectionable artifacts, and current neural denoising methods are incapable of handling the variable number of depth bins in deep-Z images. Our method extends kernel-predicting convolutional neural networks to address the challenges stemming from denoising deep-Z images. We propose a hybrid reconstruction architecture that combines the depth-resolved reconstruction at each bin with the flattened reconstruction at the pixel level. Moreover, we propose depth-aware neighbor indexing of the depth-resolved inputs to the convolution and denoising-kernel application operators, which reduces artifacts caused by depth misalignment present in deep-Z images. We evaluate our method on a production-quality deep-Z dataset, demonstrating significant improvements in denoising quality and performance compared to the current state-of-the-art deep-Z denoiser.
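The depth-aware neighbor indexing idea, pairing a neighboring pixel's bins by depth proximity rather than by raw bin index, can be sketched as a nearest-depth lookup. The exact matching rule used by the paper's operators is an assumption here:

```python
import numpy as np

def depth_aware_index(z_center, z_neighbor):
    """For each depth bin of the center pixel, pick the neighbor-pixel bin
    whose depth is closest, instead of pairing bins by raw index. Sketch of
    depth-aware neighbor indexing; the paper's precise rule may differ.

    z_center:   (B,)  bin depths at the center pixel
    z_neighbor: (B',) bin depths at a neighboring pixel (counts may differ)
    returns (B,) indices into the neighbor's bins
    """
    # |z_c - z_n| for every center/neighbor bin pair, then nearest bin.
    return np.abs(z_center[:, None] - z_neighbor[None, :]).argmin(axis=1)
```

This handles the variable bin counts across pixels naturally, since the lookup never assumes the two pixels store the same number of bins.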
By addressing the significant cost of rendering path-traced deep-Z images, we believe that our approach will pave the way for broader adoption of deep-Z workflows in future productions.

Item: Optimizing Stereo-to-Multiview Conversion for Autostereoscopic Displays (The Eurographics Association and John Wiley & Sons Ltd., 2014)
Authors: Chapiro, Alexandre; Heinzle, Simon; Aydin, Tunç Ozan; Poulakos, Steven; Zwicker, Matthias; Smolic, Aljosa; Gross, Markus
Editors: Levy, B.; Kautz, J.
Abstract: We present a novel stereo-to-multiview video conversion method for glasses-free multiview displays. Unlike previous stereo-to-multiview approaches, our mapping algorithm utilizes the limited depth range of autostereoscopic displays optimally and strives to preserve the scene's artistic composition and perceived depth even under strong depth compression. We first present an investigation of how perceived image quality relates to spatial frequency and disparity. The outcome of this study is utilized in a two-step mapping algorithm, where we (i) compress the scene depth using a non-linear global function to fit the depth range of an autostereoscopic display, and (ii) enhance the depth gradients of salient objects to restore the perceived depth and salient scene structure. Finally, an adapted image-domain warping algorithm is proposed to generate the multiview output, which enables overall disparity range extension.
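The first stage of such a two-step mapping, a non-linear global compression of scene disparity into a display's comfortable range, can be illustrated with a smooth monotonic remap such as tanh. The paper derives its operator from a perceptual study, so treat this as a sketch only:

```python
import numpy as np

def compress_disparity(d, d_in_max, d_out_max):
    """Map scene disparities in roughly [-d_in_max, d_in_max] into a
    display range [-d_out_max, d_out_max] with a smooth, monotonic,
    non-linear curve. The tanh shape is an assumption for illustration,
    not the perceptually derived operator from the paper.

    d:         disparity value(s) in input units
    d_in_max:  disparity magnitude treated as the input scale
    d_out_max: maximum disparity magnitude the display can show comfortably
    """
    d = np.asarray(d, float)
    return d_out_max * np.tanh(d / d_in_max)
```

A remap like this flattens depth gradients most strongly at the extremes, which is what would motivate a second stage that restores gradients on salient objects.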