Search Results
Now showing 1 - 10 of 103
Item: Sketching Vocabulary for Crowd Motion (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Mathew, C. D. Tharindu; Benes, Bedrich; Aliaga, Daniel
Editors: Dominik L. Michels; Soeren Pirk
This paper proposes and evaluates a sketching language to author crowd motion. It focuses on the path, speed, thickness, and density parameters of crowd motion. A sketch-based vocabulary is proposed for each parameter and evaluated in a user study against complex crowd scenes. A sketch recognition pipeline converts the sketches into a crowd simulation. The user study results show that 1) participants at various skill levels can draw accurate crowd motion through sketching, 2) certain sketch styles lead to a more accurate representation of crowd parameters, and 3) sketching allows complex crowd motions to be produced in a few seconds. The results also show that some styles, although accurate, are less preferred than less accurate ones.

Item: N-Cloth: Predicting 3D Cloth Deformation with Mesh-Based Networks (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Li, Yu Di; Tang, Min; Yang, Yun; Huang, Zi; Tong, Ruo Feng; Yang, Shuang Cai; Li, Yao; Manocha, Dinesh
Editors: Chaine, Raphaëlle; Kim, Min H.
We present a novel mesh-based learning approach (N-Cloth) for plausible 3D cloth deformation prediction. Our approach is general and can handle cloth or obstacles represented by triangle meshes with arbitrary topologies. We use graph convolution to transform the cloth and object meshes into a latent space to reduce the non-linearity in the mesh space. Our network can predict the target 3D cloth mesh deformation based on the initial state of the cloth mesh template and the target obstacle mesh. Our approach can handle complex cloth meshes with up to 100K triangles and scenes with various objects corresponding to SMPL humans, non-SMPL humans, or rigid bodies. In practice, our approach can be used to generate plausible cloth simulation at 30-45 fps on an NVIDIA GeForce RTX 3090 GPU. We highlight its benefits over prior learning-based methods and physically-based cloth simulators.

Item: Neural Flow Map Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Sahoo, Saroj; Lu, Yuzhe; Berger, Matthew
Editors: Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias
In this paper we present a reconstruction technique for the reduction of unsteady flow data based on neural representations of time-varying vector fields. Our approach is motivated by the large amount of data typically generated in numerical simulations, and in turn the types of data that domain scientists can generate in situ that are compact, yet useful, for post hoc analysis. One type of data commonly acquired during simulation is samples of the flow map, where a single sample is the result of integrating the underlying vector field for a specified time duration. In our work, we treat a collection of flow map samples for a single dataset as a meaningful, compact, and yet incomplete representation of unsteady flow, and our central objective is to find a representation that enables us to best recover arbitrary flow map samples. To this end, we introduce a technique for learning implicit neural representations of time-varying vector fields that are specifically optimized to reproduce flow map samples sparsely covering the spatiotemporal domain of the data. We show that, despite aggressive data reduction, our optimization problem (learning a function-space neural network to reproduce flow map samples under a fixed integration scheme) leads to representations that generalize strongly, both in the field itself and in using the field to approximate the flow map. Through quantitative and qualitative analysis across different datasets, we show that our approach improves on a variety of data reduction methods across a range of measures, including the reconstructed vector field, flow maps, and features derived from the flow map.
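The training loop for such a flow-map-supervised neural representation can be summarized in a few lines. The sketch below is a minimal illustration, not the authors' implementation: it assumes a 2D unsteady field, a plain MLP for the velocity network, and fixed-step RK4 as the integration scheme; the names (`VelocityField`, `integrate_rk4`) and the placeholder data are hypothetical.

```python
# Minimal sketch: train an implicit neural vector field so that integrating it
# reproduces given flow map samples. Illustrative only, not the paper's code.
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """MLP v(x, t) over the spatiotemporal domain (hypothetical architecture)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),   # input: (x, y, t)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),              # output: 2D velocity
        )
    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def integrate_rk4(field, x0, t0, tau, steps=16):
    """Fixed RK4 scheme: advect x0 from time t0 for duration tau (the flow map)."""
    x, t, h = x0, t0, tau / steps
    for _ in range(steps):
        k1 = field(x, t)
        k2 = field(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = field(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = field(x + h * k3, t + h)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + h
    return x

field = VelocityField()
opt = torch.optim.Adam(field.parameters(), lr=1e-4)
# One batch of flow map samples (placeholders; real ones come from the simulation).
x0 = torch.rand(256, 2); t0 = torch.rand(256, 1); tau = torch.full((256, 1), 0.1)
x_end = x0 + 0.01
loss = ((integrate_rk4(field, x0, t0, tau) - x_end) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()
```

The key point is that gradients flow through the integrator, so the field is optimized to reproduce flow maps rather than to fit pointwise velocities.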
Item: SurfNet: Learning Surface Representations via Graph Convolutional Network (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Han, Jun; Wang, Chaoli
Editors: Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias
For scientific visualization applications, understanding the structure of a single surface (e.g., stream surface, isosurface) and selecting representative surfaces play a crucial role. In response, we propose SurfNet, a graph-based deep learning approach for representing a surface locally at the node level and globally at the surface level. By treating surfaces as graphs, we leverage a graph convolutional network to learn node embeddings on a surface. To make the learned embeddings effective, we consider various pieces of information (e.g., position, normal, velocity) as network input and investigate multiple losses. Furthermore, we apply dimensionality reduction to transform the learned embeddings into 2D space for understanding and exploration. To demonstrate the effectiveness of SurfNet, we evaluate the embeddings in node clustering (node-level) and surface selection (surface-level) tasks. We compare SurfNet against state-of-the-art node embedding approaches and surface selection methods, and we further demonstrate its superiority by comparing it against a spectral-based mesh segmentation approach. The results show that SurfNet learns better representations at the node and surface levels with less training time and fewer training samples, while generating comparable or better clustering and selection results.
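As a rough illustration of the node-embedding step, the sketch below runs two dense graph-convolution layers over a mesh treated as a graph. The layer sizes, per-node inputs, and edge list are placeholders and do not reflect SurfNet's actual architecture or losses.

```python
# Sketch: graph convolution over a surface mesh treated as a graph, producing
# a per-node embedding from inputs such as position, normal, and velocity.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One symmetric-normalized graph convolution: H' = relu(A_hat @ H @ W)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
    def forward(self, h, a_hat):
        return torch.relu(self.lin(a_hat @ h))

def normalized_adjacency(edges, n):
    """Build A_hat = D^-1/2 (A + I) D^-1/2 from an edge list (dense, for brevity)."""
    a = torch.eye(n)
    a[edges[:, 0], edges[:, 1]] = 1.0
    a[edges[:, 1], edges[:, 0]] = 1.0
    d = a.sum(dim=1).rsqrt()
    return d[:, None] * a * d[None, :]

n = 1000                                  # mesh vertices
feats = torch.randn(n, 9)                 # e.g., position + normal + velocity
edges = torch.randint(0, n, (3000, 2))    # mesh edges (placeholder)
a_hat = normalized_adjacency(edges, n)
embed = GCNLayer(16, 32)(GCNLayer(9, 16)(feats, a_hat), a_hat)  # node embeddings
```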
Item: Dressi: A Hardware-Agnostic Differentiable Renderer with Reactive Shader Packing and Soft Rasterization (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Takimoto, Yusuke; Sato, Hiroyuki; Takehara, Hikari; Uragaki, Keishiro; Tawara, Takehiro; Liang, Xiao; Oku, Kentaro; Kishimoto, Wataru; Zheng, Bo
Editors: Chaine, Raphaëlle; Kim, Min H.
Differentiable rendering (DR) enables various computer graphics and computer vision applications through gradient-based optimization with derivatives of the rendering equation. Most rasterization-based approaches are built on general-purpose automatic differentiation (AD) libraries and DR-specific modules handcrafted using CUDA. Such a system design mixes DR algorithm implementation with algorithm building blocks, resulting in hardware dependency and limited performance. In this paper, we present a practical hardware-agnostic differentiable renderer called Dressi, which is based on a new full AD design. The DR algorithms of Dressi are fully written in our Vulkan-based AD for DR, Dressi-AD, which supports all primitive operations for DR. Dressi-AD and our inverse UV technique inside it bring hardware independence and acceleration by graphics hardware. Stage packing, our runtime optimization technique, adapts to hardware constraints and efficiently executes complex computational graphs of DR with a reactive cache that takes the render pass hierarchy of Vulkan into account. HardSoftRas, our novel rendering process, is designed for inverse rendering with a graphics pipeline: despite the limited functionality of the graphics pipeline, it can propagate the gradients of pixels from screen space to far-range triangle attributes. Our experiments and applications demonstrate that Dressi achieves hardware independence, fast, high-quality, and robust optimization, and photorealistic rendering.

Item: Automatic Differentiable Procedural Modeling (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Gaillard, Mathieu; Krs, Vojtech; Gori, Giorgio; Mech, Radomír; Benes, Bedrich
Editors: Chaine, Raphaëlle; Kim, Min H.
Procedural modeling allows for the automatic generation of large amounts of similar assets, but there is limited control over the generated output. We address this problem by introducing Automatic Differentiable Procedural Modeling (ADPM). The forward procedural model generates a final editable model. The user modifies the output interactively, and the modifications are transferred back to the procedural model as its parameters by solving an inverse procedural modeling problem. We present an auto-differentiable representation of the procedural model that significantly accelerates optimization. In ADPM the procedural model is always available, all changes are non-destructive, and the user can interactively model the 3D object while keeping the procedural representation. ADPM provides the user with precise control over the resulting model, comparable to non-procedural interactive modeling. ADPM is node-based, and it generates hierarchical 3D scene geometry converted to a differentiable computational graph. Our formulation focuses on the differentiability of high-level primitives and bounding volumes of components of the procedural model rather than the detailed mesh geometry. Although this high-level formulation limits the expressiveness of user edits, it allows for efficient derivative computation and enables interactivity. We designed a new optimizer to solve for inverse procedural modeling. It can detect that an edit is under-determined and has degrees of freedom. Leveraging cheap derivative evaluation, it can explore the region of optimality of edits and suggest various configurations, all of which achieve the requested edit differently. We show our system's efficiency on several examples, and we validate it through a user study.
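The inverse step of such a pipeline can be illustrated with a toy differentiable program: autograd carries a user edit on the output geometry back to the procedural parameters. The "house" model, its parameters, and the optimizer settings below are invented for illustration; ADPM's actual node-based graph and custom optimizer (including its handling of under-determined edits) are considerably richer.

```python
# Sketch of inverse procedural modeling via autograd: fit procedural
# parameters so the model's output matches a user edit. Toy example only.
import torch

params = torch.tensor([1.0, 2.0, 0.5], requires_grad=True)  # width, height, roof

def procedural_model(p):
    """Toy forward model: position of one bounding-volume corner of a 'house'."""
    width, height, roof = p
    return torch.stack([width / 2, height + roof, width / 2])

target = torch.tensor([0.8, 3.0, 0.8])   # where the user dragged the corner
opt = torch.optim.Adam([params], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = ((procedural_model(params) - target) ** 2).sum()
    loss.backward()                       # derivatives w.r.t. procedural parameters
    opt.step()
# params now encode a procedural model whose output matches the edit,
# while the asset remains fully procedural and re-editable.
```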
Item: Volumetric Multi-View Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Fraboni, Basile; Webanck, Antoine; Bonneel, Nicolas; Iehl, Jean-Claude
Editors: Chaine, Raphaëlle; Kim, Min H.
Rendering photo-realistic images using Monte Carlo path tracing often requires sampling a large number of paths to reach acceptable levels of noise. This is particularly the case when rendering participating media, which complicate light paths with multiple scattering events. Our goal is to accelerate the rendering of heterogeneous participating media by exploiting redundancy across views, for instance when rendering animated camera paths, motion blur in consecutive frames, or multi-view images such as lenticular or light-field images. This poses a challenge, as existing methods for sharing light paths across views cannot handle heterogeneous participating media, and classical estimators are not optimal in this context. We address these issues with three key ideas. First, we propose new volume shift mappings to transform light paths from one view to another within the recently introduced null-scattering framework, taking into account changes in density along the transformed path. Second, we generate a shared path suffix that best contributes to a subset of views, effectively reducing variance. Third, we introduce the multiple weighted importance sampling estimator, which benefits from multiple importance sampling for combining sampling strategies and from weighted importance sampling for reducing the variance due to non-contributing strategies. We observe significant reuse when views largely overlap, with no visible bias and reduced variance compared to regular path tracing at equal render time. Our method readily integrates into existing volumetric path tracing pipelines.

Item: Coverage Axis: Inner Point Selection for 3D Shape Skeletonization (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Dou, Zhiyang; Lin, Cheng; Xu, Rui; Yang, Lei; Xin, Shiqing; Komura, Taku; Wang, Wenping
Editors: Chaine, Raphaëlle; Kim, Min H.
In this paper, we present a simple yet effective formulation called Coverage Axis for 3D shape skeletonization. Inspired by the set cover problem, our key idea is to cover all the surface points using as few inside medial balls as possible. This formulation inherently induces a compact and expressive approximation of the Medial Axis Transform (MAT) of a given shape. Different from previous methods that rely on local approximation error, our method allows a global consideration of the overall shape structure, leading to an efficient high-level abstraction and superior robustness to noise. Another appealing aspect of our method is its capability to handle more generalized input such as point clouds and poor-quality meshes. Extensive comparisons and evaluations demonstrate the remarkable effectiveness of our method for generating compact and expressive skeletal representations to approximate the MAT.
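The set-cover formulation is easy to illustrate: given candidate inner balls, repeatedly select the ball that newly covers the most surface samples until all are covered. The greedy loop below only sketches the shape of the formulation; the paper's actual selection procedure and inputs differ, and all arrays here are placeholders.

```python
# Sketch of the set-cover view of skeletonization: cover all surface samples
# with as few inside medial balls (center, radius) as possible, greedily.
import numpy as np

def coverage_greedy(surface_pts, centers, radii, dilation=0.02):
    """Greedy set cover: each (dilated) ball covers the surface points inside it."""
    dist = np.linalg.norm(surface_pts[None, :, :] - centers[:, None, :], axis=2)
    covers = dist <= (radii[:, None] + dilation)   # (n_balls, n_points) membership
    uncovered = np.ones(surface_pts.shape[0], dtype=bool)
    chosen = []
    while uncovered.any():
        gains = (covers & uncovered).sum(axis=1)   # new points each ball would cover
        best = int(gains.argmax())
        if gains[best] == 0:                       # remaining points are uncoverable
            break
        chosen.append(best)
        uncovered &= ~covers[best]
    return chosen                                  # indices of selected inner balls

pts = np.random.rand(500, 3)                # surface samples (placeholder)
ctrs = np.random.rand(200, 3)               # candidate inner points
rads = np.random.uniform(0.05, 0.2, 200)    # their medial-ball radii
skeleton_balls = coverage_greedy(pts, ctrs, rads)
```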
Item: Precise High-order Meshing of 2D Domains with Rational Bézier Curves (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Yang, Jinlin; Liu, Shibo; Chai, Shuangming; Liu, Ligang; Fu, Xiao-Ming
Editors: Campen, Marcel; Spagnuolo, Michela
We propose a novel method to generate a high-order triangular mesh for an input 2D domain with two key characteristics: (1) the mesh precisely conforms to a set of input piecewise rational domain curves, and (2) the geometric map on each curved triangle is injective. Central to the algorithm is a new sufficient condition for placing control points of a rational Bézier triangle that guarantees the conformance and injectivity constraints are theoretically satisfied. Taking advantage of this condition, we provide an explicit construction that robustly creates high-order 2D meshes satisfying the two characteristics. We demonstrate the robustness and effectiveness of our algorithm on a data set containing 2200 examples.

Item: User-Controllable Latent Transformer for StyleGAN Image Layout Editing (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Endo, Yuki
Editors: Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
Latent space exploration is a technique that discovers interpretable latent directions and manipulates latent codes to edit various attributes in images generated by generative adversarial networks (GANs). However, in previous work, spatial control is limited to simple transformations (e.g., translation and rotation), and it is laborious to identify appropriate latent directions and adjust their parameters. In this paper, we tackle the problem of editing the StyleGAN image layout by annotating the image directly. To do so, we propose an interactive framework for manipulating latent codes in accordance with user inputs. In our framework, the user annotates a StyleGAN image with the locations they want to move or keep fixed and specifies a movement direction by mouse dragging. From these user inputs and the initial latent codes, our latent transformer, based on a transformer encoder-decoder architecture, estimates the output latent codes, which are fed to the StyleGAN generator to obtain the result image. To train the latent transformer, we utilize synthetic data and pseudo-user inputs generated by off-the-shelf StyleGAN and optical flow models, without manual supervision. Quantitative and qualitative evaluations demonstrate the effectiveness of our method over existing methods.
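A minimal sketch of the latent-transformer idea follows, assuming W+ latent codes and annotations encoded as (x, y, dx, dy) tuples. The dimensions, token layout, and offset prediction are guesses for illustration and do not reproduce the paper's architecture or its StyleGAN weights.

```python
# Sketch: a transformer maps initial StyleGAN latent codes plus encoded user
# annotations (anchor points and drag vectors) to edited latent codes.
import torch
import torch.nn as nn

class LatentTransformer(nn.Module):
    def __init__(self, w_dim=512, d_model=256):
        super().__init__()
        self.embed_w = nn.Linear(w_dim, d_model)     # per-layer latent tokens
        self.embed_anno = nn.Linear(4, d_model)      # (x, y, dx, dy) per annotation
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, w_dim)
    def forward(self, w_plus, annotations):
        # Encoder consumes annotations; decoder refines latent tokens against them.
        memory_in = self.embed_anno(annotations)     # (B, n_anno, d_model)
        tokens = self.embed_w(w_plus)                # (B, n_layers, d_model)
        h = self.transformer(memory_in, tokens)
        return w_plus + self.out(h)                  # predict a latent offset

model = LatentTransformer()
w_plus = torch.randn(1, 14, 512)                     # W+ codes from a GAN inversion
annos = torch.tensor([[[0.3, 0.6, 0.1, 0.0],         # move this point to the right
                       [0.7, 0.2, 0.0, 0.0]]])       # keep this point fixed
w_edited = model(w_plus, annos)                      # feed to the StyleGAN generator
```

Predicting an offset from the initial codes (rather than the codes directly) keeps unedited regions stable, which matches the interactive-editing goal described in the abstract.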