Search Results
Now showing 1 - 10 of 18
Item: Learning-Based Animation of Clothing for Virtual Try-On (The Eurographics Association and John Wiley & Sons Ltd., 2019) Santesteban, Igor; Otaduy, Miguel A.; Casas, Dan; Alliez, Pierre and Pellacini, Fabio
This paper presents a learning-based clothing animation method for highly efficient virtual try-on simulation. Given a garment, we preprocess a rich database of physically-based dressed character simulations, for multiple body shapes and animations. Then, using this database, we train a learning-based model of cloth drape and wrinkles, as a function of body shape and dynamics. We propose a model that separates global garment fit, due to body shape, from local garment wrinkles, due to both pose dynamics and body shape. We use a recurrent neural network to regress garment wrinkles, and we achieve highly plausible nonlinear effects, in contrast to the blending artifacts suffered by previous methods. At runtime, dynamic virtual try-on animations are produced in just a few milliseconds for garments with thousands of triangles. We show qualitative and quantitative analysis of results.

Item: Data-Driven Simulation Methods in Computer Graphics: Cloth, Tissue and Faces (The Eurographics Association, 2013) Otaduy, Miguel A.; Bickel, Bernd; Bradley, Derek; Diego Gutierrez and Karol Myszkowski
In recent years, the field of computer animation has witnessed the invention of multiple simulation methods that exploit pre-recorded data to improve the performance and/or realism of dynamic deformations. Various methods have been presented concurrently, and they present differences, but also similarities, that have not yet been analyzed or discussed. This course focuses on the application of data-driven methods to three areas of computer animation, namely dynamic deformation of faces, soft volumetric tissue, and cloth. The course describes the particular challenges tackled in a data-driven manner, classifies the various methods, and also shares insights for their application to other settings. The explosion of data-driven animation methods and the success of their results make this course extremely timely. Until now, the proposed methods have remained known only within the research community, and have not made their way into the computer graphics industry. This course serves two main purposes: first, to present a common theory and understanding of data-driven methods for dynamic deformations that may inspire the development of novel solutions; and second, to bridge the gap with industry by making data-driven approaches accessible. The course targets an audience consisting of both researchers and programmers in computer animation.

Item: DYVERSO: A Versatile Multi‐Phase Position‐Based Fluids Solution for VFX (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Alduán, Iván; Tena, Angel; Otaduy, Miguel A.; Chen, Min and Zhang, Hao (Richard)
Many impressive fluid simulation methods have been presented in research papers before. These papers typically focus on demonstrating particular innovative features, but they do not comprehensively meet the production demands of actual VFX pipelines. VFX artists seek methods that are flexible, efficient, robust and scalable, and these goals often conflict with each other. In this paper, we present a multi‐phase particle‐based fluid simulation framework, based on the well‐known Position‐Based Fluids (PBF) method, designed to address VFX production demands. Our simulation framework handles multi‐phase interactions robustly thanks to a modified constraint formulation for density-contrast PBF, and it also supports the interaction of fluids sampled at different resolutions. We put special care into data structure design and implementation details.
Our framework highlights cache‐efficient GPU‐friendly data structures, an improved spatial voxelization technique based on Z-index sorting, tuned‐up simulation algorithms, and two‐way‐coupled collision handling based on VDB fields. Altogether, our fluid simulation framework empowers artists with the efficiency, scalability and versatility needed for simulating very diverse scenes and effects.

Item: Efficient Collision Detection for Brittle Fracture (The Eurographics Association, 2012) Glondu, Loeiz; Schvartzman, Sara C.; Marchal, Maud; Dumont, Georges; Otaduy, Miguel A.; Jehee Lee and Paul Kry
In complex scenes with many objects, collision detection plays a key role in the simulation performance. This is particularly true for fracture simulation, where multiple new objects are dynamically created. In this paper, we present novel algorithms and data structures for collision detection in real-time brittle fracture simulations. We build on a combination of well-known efficient data structures, namely distance fields and sphere trees, making our algorithm easy to integrate into existing simulation engines. We propose novel methods to construct these data structures, such that they can be efficiently updated upon fracture events and integrated into a simple yet effective self-adapting contact selection algorithm. Altogether, we drastically reduce the cost of both collision detection and collision response. We have evaluated our global solution for collision detection on challenging scenarios, achieving high frame rates suited for hard real-time applications such as video games or haptics. Our solution opens promising perspectives for complex brittle fracture simulations involving many dynamically created objects.

Item: Efficient Simulation of Knitted Cloth Using Persistent Contacts (ACM Siggraph, 2015) Cirio, Gabriel; Lopez-Moreno, Jorge; Otaduy, Miguel A.; Florence Bertails-Descoubes and Stelian Coros and Shinjiro Sueda
Knitted cloth is made of yarns that are stitched in regular patterns, and its macroscopic behavior is dictated by the contact interactions between such yarns. We propose an efficient representation of knitted cloth at the yarn level that treats yarn-yarn contacts as persistent, thereby avoiding expensive contact handling altogether. We introduce a compact representation of yarn geometry and kinematics, capturing the essential deformation modes of yarn loops and stitches with a minimum cost. Based on this representation, we design force models that reproduce the characteristic macroscopic behavior of knitted fabrics. We demonstrate the efficiency of our method on simulations with millions of degrees of freedom (hundreds of thousands of yarn loops), almost one order of magnitude faster than previous techniques.

Item: Fast Deformation of Volume Data Using Tetrahedral Mesh Rasterization (ACM SIGGRAPH / Eurographics Association, 2013) Gascon, Jorge; Espadero, Jose M.; Perez, Alvaro G.; Torres, Rosell; Otaduy, Miguel A.; Theodore Kim and Robert Sumner
Many inherently deformable structures, such as human anatomy, are often represented using a regular volumetric discretization, e.g., in medical imaging. While deformation algorithms employ discretizations that deform themselves along with the material, visualization algorithms are optimized for regular undeformed discretizations.
In this paper, we propose a method to transform high-resolution volume data embedded in a deformable tetrahedral mesh. We cast volume deformation as a problem of tetrahedral rasterization with 3D texture mapping. Then, the core of our solution to volume data deformation is a very fast algorithm for tetrahedral rasterization. We perform rasterization as a massively parallel operation on target voxels, and we minimize the number of voxels to be handled using a multi-resolution culling approach. Our method allows the deformation of volume data with over 20 million voxels at interactive rates.

Item: FASTCD: Fracturing-Aware Stable Collision Detection (The Eurographics Association, 2010) Heo, Jae-Pil; Seong, Joon-Kyung; Kim, DukSu; Otaduy, Miguel A.; Hong, Jeong-Mo; Tang, Min; Yoon, Sung-Eui; Zoran Popovic and Miguel Otaduy
We present a collision detection (CD) method for complex and large-scale fracturing models that undergo geometric and topological changes. We first propose a novel dual-cone culling method to improve the performance of CD, especially self-collision detection among fracturing models. Our dual-cone culling method has a small computational overhead and is conservative. Combined with bounding volume hierarchies (BVHs), our dual-cone culling method becomes approximate; however, we found that it does not miss any collisions in the tested benchmarks. We also propose a novel, selective restructuring method that improves the overall performance of CD and reduces performance degradation at fracturing events. Our restructuring method is based on a culling efficiency metric that measures the expected number of overlap tests of a BVH. To further reduce performance degradation at fracturing events, we also propose a novel, fast BVH construction method that builds multiple levels of the hierarchy in one iteration using a grid and hashing. We test our method with four different large-scale deforming benchmarks. Compared to the state-of-the-art methods, our method shows a more stable performance for CD, improving performance by a factor of up to two orders of magnitude at frames when deforming models change their mesh topologies.

Item: Dissipation Potentials for Yarn-Level Cloth (The Eurographics Association, 2017) Sánchez-Banderas, Rosa M.; Otaduy, Miguel A.; Fco. Javier Melero and Nuria Pelechano
Damping is a critical phenomenon in determining the dynamic behavior of animated objects. For yarn-level cloth models, setting the correct damping behavior is particularly complicated, because common damping models in computer graphics do not account for the mixed Eulerian-Lagrangian discretization of efficient yarn-level models. In this paper, we show how to derive a damping model for yarn-level cloth from dissipation potentials. We develop specific formulations for the deformation modes present in yarn-level cloth, circumventing various numerical difficulties. We show that the proposed model enables independent control of the damping behavior of each deformation mode, unlike previous models.

Item: Data-Driven Estimation of Cloth Simulation Models (The Eurographics Association and John Wiley and Sons Ltd., 2012) Miguel, Eder; Bradley, Derek; Thomaszewski, Bernhard; Bickel, Bernd; Matusik, Wojciech; Otaduy, Miguel A.; Marschner, Steve; P. Cignoni and T. Ertl
Progress in cloth simulation for computer animation and apparel design has led to a multitude of deformation models, each with its own way of relating geometry, deformation, and forces. As simulators improve, differences between these models become more important, but it is difficult to choose a model and a set of parameters to match a given real material simply by looking at simulation results. This paper provides measurement and fitting methods that allow nonlinear models to be fit to the observed deformation of a particular cloth sample.
Unlike standard textile testing, our system measures complex 3D deformations of a sheet of cloth, not just one-dimensional force-displacement curves, so it works under a wider range of deformation conditions. The fitted models are then evaluated by comparison to measured deformations with motions very different from those used for fitting.

Item: Sparse GPU Voxelization of Yarn‐Level Cloth (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Lopez‐Moreno, Jorge; Miraut, David; Cirio, Gabriel; Otaduy, Miguel A.; Chen, Min and Zhang, Hao (Richard)
Most popular methods in cloth rendering rely on volumetric data in order to model complex optical phenomena such as sub‐surface scattering. These approaches are able to produce very realistic illumination results, but their volumetric representations are costly to compute and render, forfeiting any interactive feedback. In this paper, we introduce a method based on the Graphics Processing Unit (GPU) for voxelization and visualization, suitable for both interactive and offline rendering. Recent features in the OpenGL model, such as the ability to dynamically address arbitrary buffers and allocate bindless textures, are combined in our pipeline to interactively voxelize millions of polygons into a set of large three‐dimensional (3D) textures (>10 elements), generating a volume with sub‐voxel accuracy, which is suitable even for high‐density woven cloth such as linen.
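Two of the items above (DYVERSO and Sparse GPU Voxelization of Yarn‐Level Cloth) rely on Z-index (Morton order) sorting to lay out spatial data cache-coherently. The abstracts give no implementation details; the following is a minimal illustrative sketch of 3D Morton encoding using the standard bit-interleaving magic constants, not the papers' actual code.

```python
def part1by2(n: int) -> int:
    """Spread the lowest 10 bits of n so that input bit k lands at bit 3k."""
    n &= 0x000003FF
    n = (n ^ (n << 16)) & 0xFF0000FF
    n = (n ^ (n << 8)) & 0x0300F00F
    n = (n ^ (n << 4)) & 0x030C30C3
    n = (n ^ (n << 2)) & 0x09249249
    return n

def morton3(x: int, y: int, z: int) -> int:
    """Interleave the bits of 10-bit cell coordinates into a single Z-index."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

# Sorting cell coordinates by Morton key groups spatially nearby cells
# together in memory, which is the point of Z-index sorting.
cells = [(4, 2, 7), (0, 0, 1), (3, 3, 3)]
cells.sort(key=lambda c: morton3(*c))
```

Sorting particles or voxels by such a key is a common way to build the cache-efficient, GPU-friendly structures the DYVERSO abstract describes.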
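The Efficient Collision Detection for Brittle Fracture item combines distance fields with sphere trees. The core primitive test behind such a combination is cheap: a sphere penetrates an object when the signed distance sampled at its center is smaller than its radius. The sketch below is a toy illustration of that idea under an assumed `(center, radius, children)` node layout, not the paper's algorithm.

```python
import math

def unit_sphere_sdf(p):
    """Toy signed distance field: a unit sphere centered at the origin."""
    return math.sqrt(sum(c * c for c in p)) - 1.0

def collides(sdf, center, radius):
    """Sphere-vs-SDF test: penetration when the sampled distance
    at the sphere center is smaller than the sphere radius."""
    return sdf(center) < radius

def query(sdf, node):
    """Collect colliding leaves of a sphere tree. A subtree is skipped
    entirely when its bounding sphere is clear of the object."""
    center, radius, children = node
    if not collides(sdf, center, radius):
        return []
    if not children:
        return [node]
    hits = []
    for child in children:
        hits.extend(query(sdf, child))
    return hits

# A two-leaf sphere tree: one leaf grazing the unit sphere, one far away.
leaf_near = ((0.9, 0.0, 0.0), 0.2, [])
leaf_far = ((5.0, 0.0, 0.0), 0.2, [])
root = ((0.0, 0.0, 0.0), 6.0, [leaf_near, leaf_far])
hits = query(unit_sphere_sdf, root)
```

The hierarchical pruning is what makes the combination attractive for fracture: when an object breaks, only the affected subtrees of the sphere tree need updating, while clear subtrees are still rejected with one distance lookup each.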