Search Results

Now showing 1 - 10 of 16
  • Item
    Neural Flow Map Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Sahoo, Saroj; Lu, Yuzhe; Berger, Matthew; Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias
    In this paper we present a reconstruction technique for the reduction of unsteady flow data based on neural representations of time-varying vector fields. Our approach is motivated by the large amount of data typically generated in numerical simulations, and in turn the types of data that domain scientists can generate in situ that are compact, yet useful, for post hoc analysis. One type of data commonly acquired during simulation is samples of the flow map, where a single sample is the result of integrating the underlying vector field for a specified time duration. In our work, we treat a collection of flow map samples for a single dataset as a meaningful, compact, and yet incomplete representation of unsteady flow, and our central objective is to find a representation that enables us to best recover arbitrary flow map samples. To this end, we introduce a technique for learning implicit neural representations of time-varying vector fields that are specifically optimized to reproduce flow map samples sparsely covering the spatiotemporal domain of the data. We show that, despite aggressive data reduction, our optimization problem, namely learning a function-space neural network to reproduce flow map samples under a fixed integration scheme, leads to representations that generalize well, both in the field itself and in using the field to approximate the flow map. Through quantitative and qualitative analysis across different datasets we show that our approach improves on a variety of data reduction methods across a range of measures, including the reconstructed vector field, the flow map, and features derived from the flow map.
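    A minimal sketch (PyTorch) of the core optimization described above: an implicit neural vector field is trained so that integrating it with a fixed-step scheme reproduces flow map samples. The two-layer MLP, RK4 integrator, and placeholder batch are illustrative assumptions, not the paper's implementation.
```python
import torch

class VectorFieldNet(torch.nn.Module):
    """Implicit neural representation of a time-varying vector field: (x, y, z, t) -> velocity."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(4, hidden), torch.nn.SiLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.SiLU(),
            torch.nn.Linear(hidden, 3))

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def integrate_rk4(field, x0, t0, tau, steps=8):
    """Fixed-step RK4 integration of the learned field (the 'fixed integration scheme')."""
    x, t, h = x0, t0, tau / steps
    for _ in range(steps):
        k1 = field(x, t)
        k2 = field(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = field(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = field(x + h * k3, t + h)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + h
    return x

# One training step against flow map samples (start point, start time, duration, end point).
field = VectorFieldNet()
opt = torch.optim.Adam(field.parameters(), lr=1e-4)
x0, t0 = torch.rand(256, 3), torch.rand(256, 1)          # placeholder sample batch
tau, x_end = torch.full((256, 1), 0.1), torch.rand(256, 3)
loss = torch.nn.functional.mse_loss(integrate_rk4(field, x0, t0, tau), x_end)
opt.zero_grad(); loss.backward(); opt.step()
```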
  • Item
    SurfNet: Learning Surface Representations via Graph Convolutional Network
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Han, Jun; Wang, Chaoli; Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias
    For scientific visualization applications, understanding the structure of a single surface (e.g., stream surface, isosurface) and selecting representative surfaces play a crucial role. In response, we propose SurfNet, a graph-based deep learning approach for representing a surface locally at the node level and globally at the surface level. By treating surfaces as graphs, we leverage a graph convolutional network to learn node embeddings on a surface. To make the learned embeddings effective, we consider various pieces of information (e.g., position, normal, velocity) as network input and investigate multiple losses. Furthermore, we apply dimensionality reduction to transform the learned embeddings into 2D space for understanding and exploration. To demonstrate the effectiveness of SurfNet, we evaluate the embeddings in node clustering (node-level) and surface selection (surface-level) tasks. We compare SurfNet against state-of-the-art node embedding approaches and surface selection methods. We also demonstrate the superiority of SurfNet by comparing it against a spectral-based mesh segmentation approach. The results show that SurfNet can learn better representations at the node and surface levels with less training time and fewer training samples while generating comparable or better clustering and selection results.
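    A minimal sketch of the graph-convolutional node embedding described above, with the surface treated as a graph and node attributes such as position, normal, and velocity. The plain GCN layer, dimensions, and mean pooling are illustrative assumptions, not the paper's architecture.
```python
import torch

def normalized_adjacency(edges, num_nodes):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2."""
    A = torch.zeros(num_nodes, num_nodes)
    A[edges[:, 0], edges[:, 1]] = 1.0
    A[edges[:, 1], edges[:, 0]] = 1.0
    A = A + torch.eye(num_nodes)
    d = A.sum(dim=1).rsqrt()
    return d[:, None] * A * d[None, :]

class GCNEncoder(torch.nn.Module):
    def __init__(self, in_dim=9, hidden=64, embed_dim=16):  # position + normal + velocity
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hidden)
        self.w2 = torch.nn.Linear(hidden, embed_dim)

    def forward(self, x, A_hat):
        h = torch.relu(A_hat @ self.w1(x))   # aggregate neighbours, then transform
        return A_hat @ self.w2(h)            # per-node embeddings

nodes = torch.rand(500, 9)                   # placeholder node attributes
edges = torch.randint(0, 500, (1500, 2))     # placeholder surface mesh edges
A_hat = normalized_adjacency(edges, 500)
node_emb = GCNEncoder()(nodes, A_hat)        # (500, 16) node-level embeddings
surface_emb = node_emb.mean(dim=0)           # pooled surface-level embedding, e.g. for selection
```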
  • Item
    Learning Physics with a Hierarchical Graph Network
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Chentanez, Nuttapong; Jeschke, Stefan; Müller, Matthias; Macklin, Miles; Dominik L. Michels; Soeren Pirk
    We propose a hierarchical graph for learning physics and a novel way to handle obstacles. The finest level of the graph consists of the particles themselves. Coarser levels consist of the cells of sparse grids with successively doubling cell sizes covering the volume occupied by the particles. The hierarchical structure allows information to propagate over large distances in a single message-passing iteration. The novel obstacle handling allows the simulation to be obstacle-aware without the need for ghost particles. We train the network to predict the effective acceleration produced by multiple sub-steps of a 3D multi-material material point method (MPM) simulation consisting of water, sand, and snow with complex obstacles. Our network produces lower error, trains up to 7.0X faster, and runs inference up to 11.3X faster than [SGGP*20]. It is also, on average, about 3.7X faster than the Taichi Elements simulation running on the same hardware in our tests.
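    A minimal sketch of the hierarchy construction described above: particles form the finest level, and coarser levels are the occupied cells of sparse grids with successively doubled cell sizes. Function and variable names are hypothetical.
```python
import numpy as np

def build_hierarchy(positions, base_cell_size, num_levels):
    """Return, per coarse level, the parent cell index of each finer-level element."""
    levels = []
    coords = positions                      # level 0: the particles themselves
    cell = base_cell_size
    for _ in range(num_levels):
        keys = np.floor(coords / cell).astype(np.int64)          # sparse cell coordinates
        uniq, parent = np.unique(keys, axis=0, return_inverse=True)
        levels.append(parent)               # edge: finer element i -> occupied cell parent[i]
        coords = (uniq + 0.5) * cell        # cell centres become the next level's elements
        cell *= 2.0                         # successively doubled cell size
    return levels

parents = build_hierarchy(np.random.rand(10000, 3), base_cell_size=0.05, num_levels=4)
# Messages can travel particle -> cell -> coarser cell and back, so information
# propagates over large distances within a single message-passing iteration.
```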
  • Item
    Deep Flow Rendering: View Synthesis via Layer-aware Reflection Flow
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Dai, Pinxuan; Xie, Ning; Ghosh, Abhijeet; Wei, Li-Yi
    Novel view synthesis (NVS) generates images from unseen viewpoints based on a set of input images. It remains challenging because lighting optimization and geometry inference are often inaccurate. Although current neural rendering methods have made significant progress, they still struggle to reconstruct global illumination effects like reflections and exhibit ambiguous blurs in highly view-dependent areas. This work addresses high-quality view synthesis with an emphasis on reflections on non-concave surfaces. We propose Deep Flow Rendering, which optimizes direct and indirect lighting separately, leveraging texture mapping, appearance flow, and neural rendering. A learnable texture is used to predict view-independent features while also enabling efficient reflection extraction. To accurately fit view-dependent effects, we adopt a constrained neural flow to transfer image-space features from nearby views to the target view in an edge-preserving manner. We then implement a fusing renderer that utilizes the predictions of both layers to form the output image. The experiments demonstrate that our method outperforms state-of-the-art methods at synthesizing various scenes with challenging reflection effects.
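    A minimal sketch of two ingredients described above, assuming standard PyTorch building blocks: a learnable neural texture sampled by UV coordinates (view-independent layer), and warping features from a nearby view with a predicted 2D flow (view-dependent layer). It omits the constrained flow prediction and the fusing renderer; all names are illustrative.
```python
import torch
import torch.nn.functional as F

texture = torch.nn.Parameter(torch.randn(1, 16, 512, 512))   # learnable feature texture

def sample_texture(uv):
    """uv in [0, 1]^2, shape (1, H, W, 2) -> view-independent features (1, 16, H, W)."""
    grid = uv * 2.0 - 1.0                                     # grid_sample expects [-1, 1]
    return F.grid_sample(texture, grid, align_corners=False)

def warp_with_flow(neighbor_feat, flow):
    """Warp (1, C, H, W) features from a nearby view by a per-pixel flow (1, H, W, 2)."""
    _, _, H, W = neighbor_feat.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0)         # identity sampling grid
    return F.grid_sample(neighbor_feat, base + flow, align_corners=False)

uv = torch.rand(1, 256, 256, 2)
diffuse_feat = sample_texture(uv)                             # view-independent layer
reflect_feat = warp_with_flow(torch.randn(1, 16, 256, 256),   # view-dependent layer
                              torch.zeros(1, 256, 256, 2))
# A small CNN (the "fusing renderer") would then combine both layers into the output image.
```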
  • Item
    Learning from Shader Program Traces
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Yang, Yuting; Barnes, Connelly; Finkelstein, Adam; Chaine, Raphaëlle; Kim, Min H.
    Deep learning for image processing typically treats input imagery as pixels in some color space. This paper proposes instead to learn from program traces of procedural fragment shaders - programs that generate images. At each pixel, we collect the intermediate values computed during program execution, and these data form the input to the learned model. We investigate this learning task for a variety of applications: our model can learn to predict a low-noise output image from shader programs that exhibit sampling noise; it can also learn from a simplified shader program that approximates the reference solution with less computation, as well as learn the output of post-processing filters like defocus blur and edge-aware sharpening. Finally, we show that the idea of learning from program traces can even be applied to non-imagery simulations of flocks of boids. Our experiments on a variety of shaders show quantitatively and qualitatively that models learned from program traces outperform baseline models learned from RGB color augmented with hand-picked shader-specific features like normals, depth, and diffuse and specular color. We also conduct a series of analyses showing that certain features within the trace are more important than others, and that even learning from a small subset of the trace outperforms the baselines.
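    A minimal sketch of the central idea described above: a toy procedural shader is evaluated per pixel while every intermediate value is recorded, and the resulting trace replaces plain RGB as the model input. The shader and feature names are illustrative, not from the paper.
```python
import numpy as np

def toy_shader_with_trace(u, v, time=0.0):
    trace = []                              # program trace: intermediates in execution order
    def rec(x):
        trace.append(np.asarray(x, dtype=np.float32).ravel())
        return x
    freq = rec(8.0 + 4.0 * np.sin(time))
    phase = rec(u * freq + v * 3.0)
    ripple = rec(np.sin(phase))
    shade = rec(0.5 + 0.5 * ripple)
    color = rec(np.array([shade, shade * 0.8, 1.0 - shade]))
    return color, np.concatenate(trace)     # RGB output plus the trace feature vector

H = W = 64
features = np.stack([toy_shader_with_trace(x / W, y / H)[1]
                     for y in range(H) for x in range(W)]).reshape(H, W, -1)
# 'features' (H, W, trace_dim) replaces plain RGB as the per-pixel input to the learned model.
```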
  • Item
    Deep Reconstruction of 3D Smoke Densities from Artist Sketches
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Kim, Byungsoo; Huang, Xingchang; Wuelfroth, Laura; Tang, Jingwei; Cordonnier, Guillaume; Gross, Markus; Solenthaler, Barbara; Chaine, Raphaëlle; Kim, Min H.
    Creative processes of artists often start with hand-drawn sketches illustrating an object. Pre-visualizing these keyframes is especially challenging for volumetric materials such as smoke. The authored 3D density volumes must capture realistic flow details and turbulent structures, which is highly non-trivial and remains a manual and time-consuming process. We therefore present a method to compute a 3D smoke density field directly from 2D artist sketches, bridging the gap between early-stage prototyping of smoke keyframes and pre-visualization. From the sketch inputs, we compute an initial volume estimate and optimize the density iteratively with an updater CNN. Our differentiable sketcher is embedded into the end-to-end training, which results in robust reconstructions. Our training data set and sketch augmentation strategy are designed to enable general applicability. We evaluate the method on synthetic inputs and sketches from artists depicting both realistic smoke volumes and highly non-physical smoke shapes. The high computational performance and robustness of our method at test time allow interactive authoring sessions of volumetric density fields for rapid prototyping of ideas by novice users.
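    A minimal sketch of the iterative refinement described above: an updater CNN repeatedly refines the density volume while a differentiable sketcher keeps it consistent with the 2D input sketch. The 3D CNN and the projection-plus-edges sketcher are illustrative placeholders, not the paper's components.
```python
import torch
import torch.nn.functional as F

updater = torch.nn.Sequential(              # placeholder 3D updater CNN
    torch.nn.Conv3d(1, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv3d(8, 1, 3, padding=1))

def differentiable_sketcher(density):
    """Project (1, 1, D, H, W) density to a 2D 'sketch' via mean projection plus edge magnitude."""
    proj = density.mean(dim=2)              # (1, 1, H, W) front-view projection
    gx = proj[..., :, 1:] - proj[..., :, :-1]
    gy = proj[..., 1:, :] - proj[..., :-1, :]
    return torch.sqrt(gx[..., :-1, :] ** 2 + gy[..., :, :-1] ** 2 + 1e-8)

target_sketch = torch.rand(1, 1, 63, 63)    # placeholder artist sketch (edge image)
density = torch.zeros(1, 1, 64, 64, 64)     # initial volume estimate
for _ in range(5):                          # iterative updates
    density = density + updater(density)
    loss = F.mse_loss(differentiable_sketcher(density), target_sketch)
    # during training, loss.backward() flows through the sketcher into the updater
```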
  • Item
    PERGAMO: Personalized 3D Garments from Monocular Video
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Casado-Elvira, Andrés; Comino Trinidad, Marc; Casas, Dan; Dominik L. Michels; Soeren Pirk
    Clothing plays a fundamental role in digital humans. Current approaches to animating 3D garments are mostly based on realistic physics simulation; however, they typically suffer from two main issues: a high computational run-time cost, which hinders their deployment, and a simulation-to-real gap, which impedes the synthesis of specific real-world cloth samples. To circumvent both issues we propose PERGAMO, a data-driven approach to learn a deformable model for 3D garments from monocular images. To this end, we first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos. We use these 3D reconstructions to train a regression model that accurately predicts how the garment deforms as a function of the underlying body pose. We show that our method is capable of producing garment animations that match real-world behavior, and that it generalizes to unseen body motions extracted from motion capture datasets.
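    A minimal sketch of the pose-conditioned regression described above: body pose parameters in, per-vertex garment offsets out, supervised by the garment meshes reconstructed from monocular video. The MLP and all dimensions are illustrative placeholders, not the paper's network.
```python
import torch

NUM_VERTS, POSE_DIM = 4000, 72               # e.g. SMPL-style joint rotations (assumed)

class GarmentRegressor(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(POSE_DIM, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, NUM_VERTS * 3))

    def forward(self, pose):
        return self.mlp(pose).view(-1, NUM_VERTS, 3)     # per-vertex garment offsets

model = GarmentRegressor()
pose_batch = torch.rand(8, POSE_DIM)                     # placeholder body poses
target_verts = torch.rand(8, NUM_VERTS, 3)               # placeholder monocular reconstructions
loss = torch.nn.functional.mse_loss(model(pose_batch), target_verts)
```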
  • Item
    Monocular Facial Performance Capture Via Deep Expression Matching
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Bailey, Stephen W.; Riviere, Jérémy; Mikkelsen, Morten; O'Brien, James F.; Dominik L. Michels; Soeren Pirk
    Facial performance capture is the process of automatically animating a digital face according to a captured performance of an actor. Recent developments in this area have focused on high-quality results using expensive head-scanning equipment and camera rigs. These methods produce impressive animations that accurately capture subtle details in an actor's performance. However, these methods are accessible only to content creators with relatively large budgets. Current methods using inexpensive recording equipment generally produce lower quality output that is unsuitable for many applications. In this paper, we present a facial performance capture method that does not require facial scans and instead animates an artist-created model using standard blendshapes. Furthermore, our method gives artists high-level control over animations through a workflow similar to existing commercial solutions. Given a recording, our approach matches keyframes of the video with corresponding expressions from an animated library of poses. A Gaussian process model then computes the full animation by interpolating from the set of matched keyframes. Our expression-matching method computes a low-dimensional latent code from an image that represents a facial expression while factoring out the facial identity. Images depicting similar facial expressions are identified by their proximity in the latent space. In our results, we demonstrate the fidelity of our expression-matching method. We also compare animations generated with our approach to animations generated with commercially available software.
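    A minimal sketch of the two stages described above: match video keyframes to library expressions by nearest latent code, then interpolate blendshape weights over time with a Gaussian process (here scikit-learn's regressor with an RBF kernel). The encoder, library, and dimensions are illustrative assumptions, not the paper's setup.
```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

library_codes = np.random.rand(200, 32)        # latent codes of library poses (placeholder)
library_weights = np.random.rand(200, 50)      # their blendshape weights (placeholder)

def match_keyframe(frame_code):
    """Pick the library expression whose latent code is closest to the frame's code."""
    idx = np.argmin(np.linalg.norm(library_codes - frame_code, axis=1))
    return library_weights[idx]

key_times = np.array([0, 12, 25, 40])[:, None]           # matched keyframe indices
key_weights = np.stack([match_keyframe(np.random.rand(32)) for _ in range(4)])

# Gaussian process interpolation from the matched keyframes to the full animation.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0)).fit(key_times, key_weights)
all_weights = gp.predict(np.arange(41)[:, None])         # (41, 50) blendshape weights per frame
```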
  • Item
    SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Zheng, Xinyang; Liu, Yang; Wang, Pengshuai; Tong, Xin; Campen, Marcel; Spagnuolo, Michela
    We present a StyleGAN2-based deep learning approach for 3D shape generation, called SDF-StyleGAN, with the aim of reducing visual and geometric dissimilarity between generated shapes and a shape collection. We extend StyleGAN2 to 3D generation, utilizing the implicit signed distance function (SDF) as the 3D shape representation, and introduce two novel global and local shape discriminators that distinguish real and fake SDF values and gradients, which significantly improves shape geometry and visual quality. We further complement the evaluation metrics of 3D generative models with shading-image-based Fréchet inception distance (FID) scores to better assess the visual quality and shape distribution of the generated shapes. Experiments on shape generation demonstrate the superior performance of SDF-StyleGAN over the state-of-the-art. We further demonstrate the efficacy of SDF-StyleGAN in various tasks based on GAN inversion, including shape reconstruction, shape completion from partial point clouds, single-view image-based shape generation, and shape style editing. Extensive ablation studies justify the efficacy of our framework design. Our code and trained models are available at https://github.com/Zhengxinyang/SDF-StyleGAN.
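    A minimal sketch of preparing local discriminator inputs as described above: patches of SDF values together with their finite-difference gradients. The patch size and tiny 3D discriminator are illustrative placeholders, not the paper's networks.
```python
import torch

def sdf_value_and_gradient(sdf, voxel_size=1.0):
    """sdf: (N, 1, D, H, W) -> (N, 4, D-2, H-2, W-2): value plus central-difference gradient."""
    c = sdf[:, :, 1:-1, 1:-1, 1:-1]
    gx = (sdf[:, :, 2:, 1:-1, 1:-1] - sdf[:, :, :-2, 1:-1, 1:-1]) / (2 * voxel_size)
    gy = (sdf[:, :, 1:-1, 2:, 1:-1] - sdf[:, :, 1:-1, :-2, 1:-1]) / (2 * voxel_size)
    gz = (sdf[:, :, 1:-1, 1:-1, 2:] - sdf[:, :, 1:-1, 1:-1, :-2]) / (2 * voxel_size)
    return torch.cat([c, gx, gy, gz], dim=1)

local_discriminator = torch.nn.Sequential(      # judges small SDF patches (placeholder)
    torch.nn.Conv3d(4, 32, 3), torch.nn.LeakyReLU(0.2),
    torch.nn.Conv3d(32, 1, 3))

patches = torch.randn(16, 1, 18, 18, 18)        # placeholder local SDF patches
real_or_fake = local_discriminator(sdf_value_and_gradient(patches))
```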
  • Item
    UnderPressure: Deep Learning for Foot Contact Detection, Ground Reaction Force Estimation and Footskate Cleanup
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Mourot, Lucas; Hoyet, Ludovic; Clerc, François Le; Hellier, Pierre; Dominik L. Michels; Soeren Pirk
    Human motion synthesis and editing are essential to many applications like video games, virtual reality, and film post-production. However, they often introduce artefacts in motion capture data, which can be detrimental to the perceived realism. In particular, footskating is a frequent and disturbing artefact, which requires knowledge of foot contacts to be cleaned up. Current approaches to obtain foot contact labels rely either on unreliable threshold-based heuristics or on tedious manual annotation. In this article, we address automatic foot contact label detection from motion capture data with a deep-learning-based method. To this end, we first publicly release UNDERPRESSURE, a novel motion capture database labelled with pressure insole data serving as reliable knowledge of foot contact with the ground. Then, we design and train a deep neural network to estimate the ground reaction forces exerted on the feet from motion data, and derive accurate foot contact labels from them. The evaluation of our model shows that we significantly outperform heuristic approaches based on height and velocity thresholds, and that our approach is much more robust when applied to motion sequences suffering from perturbations like noise or footskate. We further propose a fully automatic workflow for footskate cleanup: foot contact labels are first derived from estimated ground reaction forces, and footskate is then removed by solving foot constraints through an optimisation-based inverse kinematics (IK) approach that ensures consistency with the estimated ground reaction forces. Beyond footskate cleanup, both the database and the method we propose could help to improve many approaches based on foot contact labels or ground reaction forces, including inverse dynamics problems like motion reconstruction and learning of deep motion models in motion synthesis or character animation. Our implementation, pre-trained model, as well as links to the database can be found at github.com/InterDigitalInc/UnderPressure.
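    A minimal sketch of deriving foot contact labels from estimated ground reaction forces as described above: a sequence model maps motion to per-foot vertical forces, and a foot is labelled in contact when its force exceeds a fraction of body weight. The GRU, threshold, and dimensions are illustrative placeholders, not the paper's model.
```python
import torch

class GRFEstimator(torch.nn.Module):
    def __init__(self, pose_dim=69, hidden=128):
        super().__init__()
        self.rnn = torch.nn.GRU(pose_dim, hidden, batch_first=True)
        self.head = torch.nn.Linear(hidden, 2)           # vertical GRF per foot (Newtons)

    def forward(self, motion):                           # motion: (B, T, pose_dim)
        h, _ = self.rnn(motion)
        return torch.relu(self.head(h))                  # forces are non-negative

def contact_labels(forces, body_weight_newton, ratio=0.05):
    """A foot is in contact when its estimated GRF exceeds `ratio` of body weight (assumed rule)."""
    return forces > ratio * body_weight_newton

est = GRFEstimator()
forces = est(torch.rand(1, 240, 69))                     # (1, 240, 2) estimated GRFs
labels = contact_labels(forces, body_weight_newton=700.0)
# The labels then drive footskate cleanup, e.g. pinning contact feet via IK.
```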