Search Results
Now showing 1 - 10 of 302
Item: c-Space: Time-evolving 3D Models (4D) from Heterogeneous Distributed Video Sources (The Eurographics Association, 2016)
Authors: Ritz, Martin; Knuth, Martin; Domajnko, Matevz; Posniak, Oliver; Santos, Pedro; Fellner, Dieter W.
Editors: Chiara Eva Catalano and Livio De Luca
Abstract: We introduce c-Space, an approach to automated 4D reconstruction of dynamic real-world scenes, represented as time-evolving 3D geometry streams, that is available to everyone. Our novel technique solves the problem of fusing all sources, asynchronously captured from multiple heterogeneous mobile devices around a dynamic scene at a real-world location. To this end, all captured input is broken down into a massive unordered frame set; the frames are sorted along a common time axis, and the ordered set is then discretized into a time sequence of frame subsets, each subject to photogrammetric 3D reconstruction. The result is a timeline of 3D models, each representing a snapshot of the scene's evolution in 3D at a specific point in time. Just as a movie is a concatenation of time-discrete frames representing the evolution of a scene in 2D, the 4D frames reconstructed by c-Space line up to form the captured, dynamically changing 3D geometry of an event over time, enabling the user to interact with it in the same way as with a static 3D model. We perform image analysis to automatically maximize result quality in the presence of challenging, heterogeneous, and asynchronous input sources exhibiting a wide quality spectrum. In addition, we show how this technique can be integrated as a 4D reconstruction web service module available to mobile end users.

Item: Environment-aware Real-Time Crowd Control (The Eurographics Association, 2012)
Authors: Henry, Joseph; Shum, Hubert P. H.; Komura, Taku
Editors: Jehee Lee and Paul Kry
Abstract: Real-time crowd control has become an important research topic due to recent advancements in console game quality and hardware processing capability.
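The frame-fusion step described in the c-Space abstract above, sorting heterogeneous frames along a common time axis and discretizing them into per-timestep subsets, can be sketched as follows. This is a minimal illustration only; the frame tuple layout and the fixed bin width are assumptions, not details from the published system:

```python
from collections import defaultdict

def discretize_frames(frames, bin_width):
    """Sort (timestamp, source_id, image) frames from all devices along a
    common time axis and group them into time-discrete subsets.

    Each subset would then be handed to a photogrammetric 3D
    reconstruction step, yielding one 3D model per time bin."""
    ordered = sorted(frames, key=lambda f: f[0])       # common time axis
    bins = defaultdict(list)
    for ts, source, image in ordered:
        bins[int(ts // bin_width)].append((ts, source, image))
    # return subsets in temporal order: a "4D" sequence of frame sets
    return [bins[k] for k in sorted(bins)]

# Example: three devices capturing asynchronously
frames = [(0.1, "A", "imgA0"), (1.2, "B", "imgB0"),
          (0.4, "C", "imgC0"), (1.9, "A", "imgA1")]
subsets = discretize_frames(frames, bin_width=1.0)
# two time bins: [0, 1) and [1, 2)
```

Each returned subset plays the role of one "4D frame": a set of images close enough in time to be reconstructed together.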
The degrees of freedom of a crowd are much higher than those provided by a standard user input device. As a result, most crowd control systems require the user to design the crowd movements in multiple passes, such as first specifying the crowd's start and goal points, then providing the agent trajectories with streamlines. Such multi-pass control would spoil the responsiveness and excitement of real-time games. In this paper, we propose a new, single-pass algorithm to control crowds using a deformable mesh. When controlling crowds, we observe that most of the low-level details are related to passive interactions between the crowd and the environment, such as obstacle avoidance and diverging/merging at cross points. Therefore, we simplify the crowd control problem by representing the crowd with a deformable mesh that passively reacts to the environment. As a result, the user can focus on the high-level control that is more important for context delivery. Our algorithm provides an efficient crowd control framework while maintaining the quality of the simulation, which is useful for real-time applications such as strategy games.

Item: Stable Orthotropic Materials (The Eurographics Association, 2014)
Authors: Li, Yijing; Barbic, Jernej
Editors: Vladlen Koltun and Eftychios Sifakis
Abstract: Isotropic Finite Element Method (FEM) deformable object simulations are widely used in computer graphics. Several applications (wood, plants, muscles) require modeling the directional dependence of the material's elastic properties in three orthogonal directions. We investigate orthotropic materials, a special class of anisotropic materials in which the shear stresses are decoupled from the normal stresses. Orthotropic materials generalize transversely isotropic materials by exhibiting different stiffnesses in three orthogonal directions. Orthotropic materials are, however, parameterized by nine values that are difficult to tune in practice, as poorly adjusted settings easily lead to simulation instabilities.
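The stability issue mentioned in the orthotropic-materials abstract above has a classical closed form: the nine parameters (three Young's moduli E_i, three shear moduli G_ij, three Poisson's ratios nu_ij) give a stable material exactly when the compliance matrix is positive definite. A minimal sketch of that textbook check (standard continuum mechanics, not the paper's specific parameter-setting approach):

```python
def orthotropic_stable(E1, E2, E3, nu12, nu13, nu23, G12, G13, G23):
    """Check positive definiteness of the orthotropic compliance matrix.

    Shear is decoupled, so the 6x6 matrix is block diagonal:
    G_ij > 0 handles the shear block, and Sylvester's criterion on
    the 3x3 normal-stress block gives the remaining conditions."""
    if min(E1, E2, E3, G12, G13, G23) <= 0:
        return False
    # reciprocal Poisson's ratios from the symmetry nu_ij / E_i = nu_ji / E_j
    nu21 = nu12 * E2 / E1
    nu31 = nu13 * E3 / E1
    nu32 = nu23 * E3 / E2
    # 2x2 principal minors of the normal-stress block
    if (1 - nu12 * nu21) <= 0 or (1 - nu13 * nu31) <= 0 or (1 - nu23 * nu32) <= 0:
        return False
    # determinant condition of the 3x3 block
    delta = (1 - nu12 * nu21 - nu23 * nu32 - nu13 * nu31
             - 2 * nu21 * nu32 * nu13)
    return delta > 0

# An isotropic-like setting is stable; an overly large Poisson's ratio is not.
print(orthotropic_stable(1, 1, 1, 0.3, 0.3, 0.3, 0.4, 0.4, 0.4))  # True
print(orthotropic_stable(1, 1, 1, 0.7, 0.7, 0.7, 0.4, 0.4, 0.4))  # False
```

The difficulty the abstract alludes to is visible here: the nine values interact through the determinant condition, so tuning any one of them can silently violate stability.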
We present a user-friendly approach to setting these parameters that is guaranteed to be stable. Our approach is intuitive, as it extends the familiar intuition known from isotropic materials. We demonstrate our technique by augmenting linear corotational FEM implementations with orthotropic materials.

Item: Interactive Low-Cost Wind Simulation For Cities (The Eurographics Association, 2016)
Authors: Rando, Eduard; Muñoz, Imanol; Patow, Gustavo
Editors: Vincent Tourre and Filip Biljecki
Abstract: Wind is a ubiquitous phenomenon on Earth, and its behavior is well studied in many fields. However, its study inside an urban landscape remains an elusive target for large areas, given the high complexity of the interactions between wind and buildings. In this paper we propose a lightweight 2D wind simulation for cities that is efficient enough to run at interactive frame rates, yet accurate enough to provide some predictive capability. The proposed algorithm is based on the Lattice-Boltzmann Method (LBM), which consists of a regular lattice that represents the fluid at discrete locations and a set of equations to simulate its flow. We perform all LBM computations in CUDA on graphics processors to accelerate the calculations.

Item: Multi-Domain Real-time Planning in Dynamic Environments (ACM SIGGRAPH / Eurographics Association, 2013)
Authors: Kapadia, Mubbasir; Beacco, Alejandro; Garcia, Francisco; Reddy, Vivek; Pelechano, Nuria; Badler, Norman I.
Editors: Theodore Kim and Robert Sumner
Abstract: This paper presents a real-time planning framework for multi-character navigation that enables the use of multiple heterogeneous problem domains of differing complexities for navigation in large, complex, dynamic virtual environments. The original navigation problem is decomposed into a set of smaller problems that are distributed across planning tasks working in these different domains.
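The core Lattice-Boltzmann update described in the wind-simulation abstract above, a regular lattice of distribution values plus a local collision rule and a streaming step, can be sketched in plain Python (the paper runs this in CUDA; the grid size, relaxation time, and periodic boundaries here are illustrative assumptions):

```python
# D2Q9 lattice: 9 discrete velocities per cell, with standard weights.
E = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]
W = [4/9] + [1/9] * 4 + [1/36] * 4
NX, NY, TAU = 8, 8, 0.6  # tiny grid and relaxation time, for illustration

def equilibrium(rho, ux, uy):
    """BGK equilibrium distribution for one cell."""
    usq = ux * ux + uy * uy
    feq = []
    for (ex, ey), w in zip(E, W):
        eu = ex * ux + ey * uy
        feq.append(w * rho * (1 + 3 * eu + 4.5 * eu * eu - 1.5 * usq))
    return feq

def lbm_step(f):
    """One collide-and-stream step with periodic boundaries."""
    out = [[[0.0] * 9 for _ in range(NY)] for _ in range(NX)]
    for x in range(NX):
        for y in range(NY):
            fc = f[x][y]
            rho = sum(fc)                                   # local density
            ux = sum(fc[i] * E[i][0] for i in range(9)) / rho
            uy = sum(fc[i] * E[i][1] for i in range(9)) / rho
            feq = equilibrium(rho, ux, uy)
            for i, (ex, ey) in enumerate(E):
                post = fc[i] - (fc[i] - feq[i]) / TAU        # BGK collision
                out[(x + ex) % NX][(y + ey) % NY][i] = post  # streaming
    return out

# Initialize a small density bump at rest and advance a few steps.
f = [[equilibrium(1.0 + (0.2 if (x, y) == (4, 4) else 0.0), 0.0, 0.0)
      for y in range(NY)] for x in range(NX)]
for _ in range(5):
    f = lbm_step(f)
```

The cell updates are independent given the previous state, which is why the method maps so well onto one-thread-per-cell CUDA kernels, as the paper exploits.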
An anytime dynamic planner is used to efficiently compute and repair plans for each of these tasks, while plans in one domain are used to focus and accelerate searches in more complex domains. We demonstrate the benefits of our framework by solving many challenging multi-agent scenarios in complex dynamic environments that require space-time precision and explicit coordination between interacting agents, accounting for dynamic information at all stages of the decision-making process.

Item: Learning a Style Space for Interactive Line Drawing Synthesis from Animated 3D Models (The Eurographics Association, 2022)
Authors: Wang, Zeyu; Wang, Tuanfeng Y.; Dorsey, Julie; Yang, Yin
Editors: Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
Abstract: Most non-photorealistic rendering (NPR) methods for line drawing synthesis operate on a static shape. They are not tailored to process animated 3D models due to the extensive per-frame parameter tuning needed to achieve the intended look and natural transitions. This paper introduces a framework for interactive line drawing synthesis from animated 3D models based on a learned style space for drawing representation and interpolation. We refer to style as the relationship between stroke placement in a line drawing and its corresponding geometric properties. Starting from a given sequence of an animated 3D character, a user creates drawings for a set of keyframes. Our system embeds the raster drawings into a latent style space after they are disentangled from the underlying geometry. By traversing the latent space, our system enables a smooth transition between the input keyframes. The user may also edit, add, or remove keyframes interactively, similar to a typical keyframe-based workflow. We implement our system with deep neural networks trained on synthetic line drawings produced by a combination of NPR methods.
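The keyframe-transition idea in the style-space abstract above, embedding keyframe drawings as latent codes and traversing the space between them, reduces in its simplest form to interpolating latent vectors over time. A hypothetical sketch (the actual system uses learned embeddings and a decoder network; the vectors and dimension here are invented):

```python
def interpolate_style(keyframes, t):
    """Linearly interpolate latent style codes between keyframes.

    keyframes: time-sorted list of (frame_time, latent_vector) pairs.
    Returns the in-between latent code at time t, which a decoder
    would then turn into a stylized line drawing."""
    if t <= keyframes[0][0]:
        return list(keyframes[0][1])
    if t >= keyframes[-1][0]:
        return list(keyframes[-1][1])
    for (t0, z0), (t1, z1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return [(1 - a) * u + a * v for u, v in zip(z0, z1)]

# Two user keyframes with 2D latent codes; frame 5 blends them equally.
keys = [(0, [0.0, 1.0]), (10, [1.0, 0.0])]
print(interpolate_style(keys, 5))  # [0.5, 0.5]
```

Editing, adding, or removing a keyframe only changes the list of anchor codes, which is what makes the workflow interactive.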
Our drawing-specific supervision and optimization-based embedding mechanism allow generalization from NPR line drawings to user-created drawings at run time. Experiments show that our approach generates high-quality line drawing animations while allowing interactive control of the drawing style across frames.

Item: A Virtual Character Posing System based on Reconfigurable Tangible User Interfaces and Immersive Virtual Reality (The Eurographics Association, 2018)
Authors: Cannavò, A.; Lamberti, F.
Editors: Livesu, Marco and Pintore, Gianni and Signoroni, Alberto
Abstract: Computer animation and, particularly, virtual character animation, are very time-consuming and skill-intensive tasks that require animators to work with sophisticated user interfaces. Tangible user interfaces (TUIs) have already proved capable of making character animation more intuitive, and possibly more efficient, by leveraging the affordances provided by physical props that mimic the structure of their virtual counterparts. The main downside of existing TUI-based animation solutions is their reduced accuracy, which is due partly to the use of mechanical parts and partly to the fact that, despite the adoption of 3D input, users still have to work with 2D output (usually represented by one or more views displayed on a screen). However, output methods that are natively 3D, e.g., those based on virtual reality (VR), have already been exploited in different ways within computer animation scenarios. Building on the above considerations and on an existing work, this paper proposes a VR-based character animation system that combines the advantages of TUIs with the improved spatial awareness, enhanced visualization, and better control of the observation point in the virtual space ensured by immersive VR.
Results of a user study with both skilled and unskilled users showed a marked preference for the devised system, which was judged as more intuitive than that of the reference work and allowed users to pose a virtual character in less time and with higher accuracy.

Item: Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Vidaurre, Raquel; Santesteban, Igor; Garces, Elena; Casas, Dan
Editors: Bender, Jan and Popa, Tiberiu
Abstract: We present a learning-based approach for virtual try-on applications based on a fully convolutional graph neural network. In contrast to existing data-driven models, which are trained for a specific garment or mesh topology, our fully convolutional model can cope with a large family of garments, represented as parametric predefined 2D panels with arbitrary mesh topology, including long dresses, shirts, and tight tops. Under the hood, our novel geometric deep learning approach learns to drape 3D garments by decoupling the three sources of deformation that condition the fit of clothing: garment type, target body shape, and material. Specifically, we first learn a regressor that predicts the 3D drape of the input parametric garment when worn by a mean body shape. Then, after a mesh topology optimization step in which we generate a sufficient level of detail for the input garment type, we further deform the mesh to reproduce deformations caused by the target body shape. Finally, we predict fine-scale details such as wrinkles, which depend mostly on the garment material.
We qualitatively and quantitatively demonstrate that our fully convolutional approach outperforms existing methods in terms of generalization capabilities and memory requirements, thereby opening the door to more general learning-based models for virtual try-on applications.

Item: Quaternion Space Sparse Decomposition for Motion Compression and Retrieval (The Eurographics Association, 2012)
Authors: Zhu, Mingyang; Sun, Huaijiang; Deng, Zhigang
Editors: Jehee Lee and Paul Kry
Abstract: Quaternions have been one of the most widely used representations for rotational transformations in 3D graphics for decades. Due to the sparse nature of human motion in both the spatial and temporal domains, an unexplored yet challenging research problem is how to directly represent intrinsically sparse human motion data in quaternion space. In this paper we propose a novel quaternion space sparse decomposition (QSSD) model that decomposes human rotational motion data into two meaningful parts (namely, the dictionary part and the weight part), with a sparseness constraint on the weight part. Specifically, the linear combination (addition) operation of Euclidean space is equivalently modeled as a quaternion multiplication operation, and the weight of the linear combination is modeled as a power operation on the quaternion. Besides validating the robustness, convergence, and accuracy of the QSSD model, we also demonstrate two of its selected applications: human motion data compression and content-based human motion retrieval.
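The QSSD abstract above maps Euclidean weighted sums to quaternion operations: addition becomes quaternion multiplication and scalar weighting becomes a quaternion power. For unit quaternions the power has a simple closed form, q^w = exp(w log q); a minimal sketch of these two primitives (illustrative only, not the paper's implementation):

```python
import math

def quat_mul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def quat_pow(q, w):
    """Power of a unit quaternion: q**w = exp(w * log q)."""
    qw, qx, qy, qz = q
    theta = math.acos(max(-1.0, min(1.0, qw)))   # half rotation angle
    s = math.sin(theta)
    if s < 1e-12:                                # near identity: axis undefined
        return (1.0, 0.0, 0.0, 0.0)
    ax, ay, az = qx / s, qy / s, qz / s          # unit rotation axis
    return (math.cos(w * theta),
            math.sin(w * theta) * ax,
            math.sin(w * theta) * ay,
            math.sin(w * theta) * az)

# A weighted "sum" of rotations in quaternion space: powers replace
# scalar weights, and multiplication replaces addition.
q = (math.cos(0.5), math.sin(0.5), 0.0, 0.0)     # rotation about the x axis
half = quat_pow(q, 0.5)
combined = quat_mul(half, half)                  # recovers q up to rounding
```

In this view a dictionary atom weighted by 0.5 contributes "half" of its rotation, which is what lets the sparse decomposition operate directly on rotational data.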
Through numerous experiments and quantitative comparisons, we demonstrate that QSSD-based approaches clearly outperform existing state-of-the-art human motion compression and retrieval approaches.

Item: Brownian Dynamics Simulation on the GPU: Virtual Colloidal Suspensions (The Eurographics Association, 2015)
Authors: Tran, Công Tâm; Crespin, Benoît; Cerbelaud, Manuella; Videcoq, Arnaud
Editors: Fabrice Jaillet and Florence Zara and Gabriel Zachmann
Abstract: Brownian Dynamics simulations are frequently used to describe and study the motion and aggregation of colloidal particles in the fields of soft matter and materials science. In this paper, we focus on the problem of neighbourhood search to accelerate computations on a single GPU. For a single particle species, our approach outperforms existing implementations by introducing a novel dynamic test. For bimodal size distributions we also introduce a new algorithm that separates computations for large and small particles, in order to avoid additional friction that is known to restrict diffusive displacements.
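The neighbourhood-search problem discussed in the Brownian Dynamics abstract above is commonly attacked with a uniform grid (cell list): particles are hashed into cells of the cutoff-radius size, so each particle only tests the adjacent cells (9 in 2D, 27 in 3D) instead of all other particles. A sequential 2D sketch of that baseline idea (the paper's GPU version and its dynamic test are not reproduced here):

```python
from collections import defaultdict

def neighbour_pairs(positions, cutoff):
    """Find all particle pairs closer than `cutoff` using a 2D cell list.

    With cell size == cutoff, any pair within range lies in the same
    or an adjacent cell, so only 9 cells are scanned per particle."""
    cells = defaultdict(list)
    for idx, (x, y) in enumerate(positions):
        cells[(int(x // cutoff), int(y // cutoff))].append(idx)
    pairs = set()
    c2 = cutoff * cutoff
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for i in members:
                    for j in cells.get((cx + dx, cy + dy), ()):
                        if i < j:                       # count each pair once
                            xi, yi = positions[i]
                            xj, yj = positions[j]
                            if (xi - xj) ** 2 + (yi - yj) ** 2 < c2:
                                pairs.add((i, j))
    return pairs

pos = [(0.0, 0.0), (0.5, 0.0), (3.0, 3.0)]
print(neighbour_pairs(pos, 1.0))  # {(0, 1)}
```

The bimodal case the paper addresses is exactly where this baseline degrades: one cell size cannot fit both large and small particles well, which motivates separating the two populations.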