Search Results
Now showing 1 - 10 of 100 results
Item: Saliency Clouds: Visual Analysis of Point Cloud-oriented Deep Neural Networks in DeepRL for Particle Physics
(The Eurographics Association, 2022) Mulawade, Raju Ningappa; Garth, Christoph; Wiebel, Alexander; Archambault, Daniel; Nabney, Ian; Peltonen, Jaakko
We develop and describe saliency clouds, that is, visualization methods that employ explainable AI to analyze and interpret deep reinforcement learning (DeepRL) agents working on point cloud-based data. The agent in our application case is tasked with tracking particles in high-energy physics and is still under development. The point clouds contain properties of particle hits on layers of a detector, which serve as the input for reconstructing the trajectories of the particles. By visualizing the influence of different points, their possible connections in an implicit graph, and other features on the decisions of the DeepRL agent's policy network, we aim to explain the agent's decision making in tracking particles and thus support its development. In particular, we adapt gradient-based saliency mapping methods to work on these point clouds. We show how the properties of these methods, which were developed for image data, translate to the structurally different point cloud data. Finally, we present visual representations of saliency clouds that support visual analysis and interpretation of the RL agent's policy network.

Item: Variational Pose Prediction with Dynamic Sample Selection from Sparse Tracking Signals
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Milef, Nicholas; Sueda, Shinjiro; Kalantari, Nima Khademi; Myszkowski, Karol; Niessner, Matthias
We propose a learning-based approach for full-body pose reconstruction from extremely sparse upper-body tracking data obtained from a virtual reality (VR) device.
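The gradient-based saliency mapping that the Saliency Clouds entry adapts to point clouds can be illustrated with a minimal NumPy sketch. Here a finite-difference gradient stands in for backpropagation, and `toy_policy` is a hypothetical placeholder, not the agent's actual policy network:

```python
import numpy as np

def saliency_cloud(policy, points, eps=1e-4):
    """Per-point saliency: norm of the (numerical) gradient of the
    policy's scalar output with respect to each point's features."""
    base = policy(points)
    saliency = np.zeros(len(points))
    for i in range(len(points)):
        grad = np.zeros(points.shape[1])
        for j in range(points.shape[1]):
            perturbed = points.copy()
            perturbed[i, j] += eps
            grad[j] = (policy(perturbed) - base) / eps
        saliency[i] = np.linalg.norm(grad)
    return saliency

def toy_policy(pts):
    # hypothetical scalar output that depends strongly on the first hit
    return 3.0 * pts[0].sum() + 0.1 * pts[1:].sum()

cloud = np.random.rand(5, 3)           # 5 particle hits, 3 features each
s = saliency_cloud(toy_policy, cloud)  # point 0 dominates the saliency map
```

In the paper's setting, such per-point values would be rendered over the point cloud itself, yielding the "saliency cloud".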
We leverage a conditional variational autoencoder with gated recurrent units to synthesize plausible and temporally coherent motions from 4-point tracking (head, hands, and waist positions and orientations). To avoid synthesizing implausible poses, we propose a novel sample selection and interpolation strategy along with an anomaly detection algorithm. Specifically, we monitor the quality of our generated poses using the anomaly detection algorithm and smoothly transition to better samples when the quality falls below a statistically defined threshold. Moreover, we demonstrate that our sample selection and interpolation method can be used for other applications, such as target hitting and collision avoidance, where the generated motions should adhere to the constraints of the virtual environment. Our system is lightweight, operates in real time, and produces temporally coherent and realistic motions.

Item: G-Style: Stylized Gaussian Splatting
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Kovács, Áron Samuel; Hermosilla, Pedro; Raidou, Renata Georgia; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
We introduce G-Style, a novel algorithm designed to transfer the style of an image onto a 3D scene represented using Gaussian Splatting. Gaussian Splatting is a powerful 3D representation for novel view synthesis, as, compared to other approaches based on Neural Radiance Fields, it provides fast scene renderings and user control over the scene. Recent pre-prints have demonstrated that the style of Gaussian Splatting scenes can be modified using an image exemplar. However, since the scene geometry remains fixed during the stylization process, current solutions fall short of producing satisfactory results. Our algorithm addresses these limitations through a three-step process: In a pre-processing step, we remove undesirable Gaussians with large projection areas or highly elongated shapes.
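The pre-processing step just described (dropping Gaussians with large footprints or highly elongated shapes) can be sketched as follows. The area and elongation criteria and the thresholds are illustrative assumptions, not the paper's exact ones:

```python
import numpy as np

def prune_gaussians(scales, max_area=1.0, max_elongation=10.0):
    """Keep Gaussians whose footprint and anisotropy are moderate.
    `scales`: (N, 3) per-axis extents of each Gaussian."""
    s = np.sort(scales, axis=1)        # ascending extents per Gaussian
    area = np.pi * s[:, 2] * s[:, 1]   # ellipse area of the two largest axes
    elongation = s[:, 2] / np.maximum(s[:, 0], 1e-8)
    return (area <= max_area) & (elongation <= max_elongation)

scales = np.array([[0.10, 0.10, 0.10],   # small, isotropic  -> keep
                   [1.00, 1.00, 1.00],   # large footprint   -> drop
                   [0.01, 0.01, 0.50]])  # needle-like       -> drop
mask = prune_gaussians(scales)
```

A real implementation would project each Gaussian into screen space before measuring its area; the sorted-extent proxy above just keeps the sketch self-contained.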
Subsequently, we combine several losses carefully designed to preserve different scales of the style in the image while maintaining, as much as possible, the integrity of the original scene content. During the stylization process, and following the original design of Gaussian Splatting, we split Gaussians where additional detail is necessary within our scene by tracking the gradient of the stylized color. Our experiments demonstrate that G-Style generates high-quality stylizations within just a few minutes, outperforming existing methods both qualitatively and quantitatively.

Item: Neural Flow Map Reconstruction
(The Eurographics Association and John Wiley & Sons Ltd., 2022) Sahoo, Saroj; Lu, Yuzhe; Berger, Matthew; Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias
In this paper we present a reconstruction technique for the reduction of unsteady flow data based on neural representations of time-varying vector fields. Our approach is motivated by the large amount of data typically generated in numerical simulations, and in turn the types of data that domain scientists can generate in situ that are compact, yet useful, for post hoc analysis. One type of data commonly acquired during simulation is samples of the flow map, where a single sample is the result of integrating the underlying vector field for a specified time duration. In our work, we treat a collection of flow map samples for a single dataset as a meaningful, compact, yet incomplete representation of unsteady flow, and our central objective is to find a representation that enables us to best recover arbitrary flow map samples. To this end, we introduce a technique for learning implicit neural representations of time-varying vector fields that are specifically optimized to reproduce flow map samples sparsely covering the spatiotemporal domain of the data.
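The core idea just described (optimize a field representation so that integrating it under a fixed scheme reproduces stored flow map samples) can be sketched as follows. The analytic fields below are hypothetical stand-ins for the simulation data and for a trained network:

```python
import numpy as np

def integrate(field, x0, t0, tau, steps=50):
    """Flow map F(x0, t0, tau): advect x0 through `field` for duration tau
    (explicit Euler; the integration scheme is held fixed during training)."""
    x, t, h = np.array(x0, float), t0, tau / steps
    for _ in range(steps):
        x = x + h * field(x, t)
        t += h
    return x

# "ground-truth" unsteady 2D field and a stand-in for the learned field
true_field = lambda x, t: np.array([-x[1], x[0]]) * (1 + 0.1 * t)
approx_field = lambda x, t: np.array([-x[1], x[0]])  # e.g. an MLP after training

sample = integrate(true_field, [1.0, 0.0], 0.0, 0.5)   # a stored flow map sample
recon = integrate(approx_field, [1.0, 0.0], 0.0, 0.5)  # reproduced by the model
loss = np.linalg.norm(sample - recon)  # the quantity training would minimize
```

In the paper's setting, gradients of this loss flow back through the integration steps into the network's weights; here the loss is only evaluated.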
We show that, despite aggressive data reduction, our optimization problem (learning a function-space neural network to reproduce flow map samples under a fixed integration scheme) leads to representations that generalize strongly, both in reconstructing the field itself and in using the field to approximate the flow map. Through quantitative and qualitative analysis across different datasets, we show that our approach improves on a variety of data reduction methods, across measures ranging from vector field quality to flow map accuracy and features derived from the flow map.

Item: NEnv: Neural Environment Maps for Global Illumination
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Rodriguez-Pardo, Carlos; Fabre, Javier; Garces, Elena; Lopez-Moreno, Jorge; Ritschel, Tobias; Weidlich, Andrea
Environment maps are commonly used to represent and compute far-field illumination in virtual scenes. However, they are expensive to evaluate and sample from, limiting their applicability to real-time rendering. Previous methods have focused on compression through spherical-domain approximations, or on learning priors for natural, daylight illumination. These approaches hinder both accuracy and generality, and do not provide the probability information required for importance-sampling Monte Carlo integration. We propose NEnv, a fully differentiable deep-learning method capable of compressing and learning to sample from a single environment map. NEnv is composed of two different neural networks: a normalizing flow, which maps samples from uniform distributions to the probability density of the illumination while also providing their corresponding probabilities; and an implicit neural representation, which compresses the environment map into an efficient differentiable function. The computation time of environment samples with NEnv is two orders of magnitude less than with traditional methods. NEnv makes no assumptions regarding the content (i.e., natural illumination), thus achieving higher generality than previous learning-based approaches. We share our implementation and a diverse dataset of trained neural environment maps, which can be easily integrated into existing rendering engines.

Item: An Interactive Tuning Method for Generator Networks Trained by GAN
(The Eurographics Association, 2022) Zhou, Mengyuan; Yamaguchi, Yasushi; Cabiddu, Daniela; Schneider, Teseo; Allegra, Dario; Catalano, Chiara Eva; Cherchi, Gianmarco; Scateni, Riccardo
Recent studies on GANs have achieved impressive results in image synthesis. However, their outputs are still imperfect and may contain unnatural regions. We propose a tuning method for generator networks trained by GAN that improves their results by interactively removing unexpected objects and textures or changing object colors. Our method finds and ablates the units in the generator network that are highly related to specific regions or their colors. Compared to related studies, our proposed method can tune pre-trained generator networks without relying on any additional information, such as segmentation-based networks. We built an interactive system based on our method that tunes generator networks to make the resulting images match expectations. The experiments show that our method removes only the unexpected objects and textures, and can likewise change the color of a selected area. The method also offers hints about the properties of generator networks, namely which layers and units are associated with objects, textures, or colors.

Item: Neural Intersection Function
(The Eurographics Association, 2023) Fujieda, Shin; Kao, Chih Chen; Harada, Takahiro; Bikker, Jacco; Gribble, Christiaan
The ray casting operation in the Monte Carlo ray tracing algorithm usually adopts a bounding volume hierarchy (BVH) to accelerate the process of finding intersections to evaluate visibility.
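The idea behind the Neural Intersection Function entry (replace divergent BVH traversal for secondary rays with nothing but dense matrix multiplications) can be sketched as follows. The 6 -> 32 -> 1 architecture and the small random weights are illustrative assumptions, standing in for a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
# a tiny stand-in for the trained network: 6 inputs -> 32 hidden -> 1 output
W1, b1 = 0.1 * rng.normal(size=(6, 32)), np.zeros(32)
W2, b2 = 0.1 * rng.normal(size=(32, 1)), np.zeros(1)

def neural_visibility(origins, directions):
    """Predict per-ray visibility (1 = unoccluded) with dense matmuls only;
    memory access is predictable, unlike a BVH traversal."""
    x = np.concatenate([origins, directions], axis=1)  # (N, 6) ray encoding
    h = np.maximum(x @ W1 + b1, 0.0)                   # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))        # sigmoid in (0, 1)

rays_o = rng.uniform(size=(4, 3))   # hypothetical secondary-ray origins
rays_d = rng.normal(size=(4, 3))    # hypothetical secondary-ray directions
vis = neural_visibility(rays_o, rays_d)
```

Batching many secondary rays through one such forward pass is what makes the approach GPU-friendly; primary rays would still go through the BVH-based pipeline.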
However, its memory access and branch execution are irregular and divergent, so it cannot achieve maximum efficiency on GPUs. This paper proposes a novel Neural Intersection Function based on a multilayer perceptron whose core operation contains only dense matrix multiplication with predictable memory access. Our method is the first solution integrating a neural network-based approach and a BVH-based ray tracing pipeline into one unified rendering framework. We can evaluate the visibility and occlusion of secondary rays without traversing the most irregular and time-consuming part of the BVH and thus accelerate ray casting. The experiments show the proposed method can reduce the secondary ray casting time for direct illumination by up to 35% compared to a BVH-based implementation while still preserving image quality.

Item: Face Editing Using Part-Based Optimization of the Latent Space
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Aliari, Mohammad Amin; Beauchamp, Andre; Popa, Tiberiu; Paquette, Eric; Myszkowski, Karol; Niessner, Matthias
We propose an approach for interactive 3D face editing based on deep generative models. Most current face modeling methods rely on linear techniques and cannot express complex and non-linear deformations. In contrast to 3D morphable face models based on Principal Component Analysis (PCA), we introduce a novel architecture based on variational autoencoders. Our architecture has multiple encoders (one for each part of the face, such as the nose and mouth) which feed a single decoder. As a result, each sub-vector of the latent vector represents one part. We train our model with a novel loss function that further disentangles the space based on the different parts of the face. The output of the network is a whole 3D face. Hence, unlike part-based PCA methods, our model learns to merge the parts intrinsically and does not require an additional merging process.
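The part-based latent layout just described, where each sub-vector of the latent code represents one facial part, can be sketched as follows, including an edit that optimizes only one part's sub-vector. The part names, sub-vector sizes, and toy constraint are hypothetical:

```python
import numpy as np

# illustrative latent layout: one sub-vector per facial part
PARTS = {"nose": slice(0, 4), "mouth": slice(4, 8), "eyes": slice(8, 12)}

def edit_part(z, part, grad_fn, lr=0.1, steps=100):
    """Gradient-descend the constraint loss, but update only the latent
    sub-vector of the edited part so the rest of the face stays fixed."""
    z = z.copy()
    for _ in range(steps):
        g = grad_fn(z)                         # gradient of the constraint loss
        z[PARTS[part]] -= lr * g[PARTS[part]]  # touch only this part
    return z

# toy constraint: pull the latent code toward an all-ones target
target = np.ones(12)
grad = lambda z: 2.0 * (z - target)  # gradient of ||z - target||^2
z0 = np.zeros(12)
z1 = edit_part(z0, "mouth", grad)
# only indices 4:8 move; the nose and eyes sub-vectors stay at zero
```

In the paper's setting, the constraint loss would come from user-provided vertex positions pushed through the decoder, rather than from this toy quadratic.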
To achieve interactive face modeling, we optimize for the latent variables given vertex positional constraints provided by a user. To avoid unwanted global changes elsewhere on the face, we only optimize the subset of the latent vector that corresponds to the part of the face being modified. Our editing optimization converges in less than a second. Our results show that the proposed approach supports a broader range of editing constraints and generates more realistic 3D faces.

Item: Cross-Shape Attention for Part Segmentation of 3D Point Clouds
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Loizou, Marios; Garg, Siddhant; Petrov, Dmitry; Averkiou, Melinos; Kalogerakis, Evangelos; Memari, Pooran; Solomon, Justin
We present a deep learning method that propagates point-wise feature representations across shapes within a collection for the purpose of 3D shape segmentation. We propose a cross-shape attention mechanism to enable interactions between a shape's point-wise features and those of other shapes. The mechanism assesses the degree of interaction between points and mediates feature propagation across shapes, improving the accuracy and consistency of the resulting point-wise feature representations for shape segmentation. Our method also includes a shape retrieval measure to select suitable shapes for cross-shape attention operations for each test shape. Our experiments demonstrate that our approach yields state-of-the-art results on the popular PartNet dataset.

Item: SurfNet: Learning Surface Representations via Graph Convolutional Network
(The Eurographics Association and John Wiley & Sons Ltd., 2022) Han, Jun; Wang, Chaoli; Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias
For scientific visualization applications, understanding the structure of a single surface (e.g., stream surface, isosurface) and selecting representative surfaces play a crucial role.
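A scaled dot-product form of the cross-shape attention described in the Cross-Shape Attention entry (queries from one shape's points, keys and values from another's) can be sketched as follows; a full implementation would typically add learned query, key, and value projections, which are omitted here:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def cross_shape_attention(feats_a, feats_b):
    """Update shape A's point-wise features by attending to shape B's points:
    each point of A takes a convex combination of B's features."""
    d = feats_a.shape[1]
    attn = softmax(feats_a @ feats_b.T / np.sqrt(d), axis=1)  # (Na, Nb)
    return attn @ feats_b  # propagated features, (Na, d)

rng = np.random.default_rng(1)
a = rng.normal(size=(100, 16))  # point-wise features of shape A
b = rng.normal(size=(80, 16))   # point-wise features of shape B
out = cross_shape_attention(a, b)
```

Since each attention row sums to one, the propagated features stay in the convex hull of the other shape's features, which is what keeps representations consistent across a collection.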
In response, we propose SurfNet, a graph-based deep learning approach for representing a surface locally at the node level and globally at the surface level. By treating surfaces as graphs, we leverage a graph convolutional network to learn node embeddings on a surface. To make the learned embeddings effective, we consider various pieces of information (e.g., position, normal, velocity) as network input and investigate multiple losses. Furthermore, we apply dimensionality reduction to transform the learned embeddings into 2D space for understanding and exploration. To demonstrate the effectiveness of SurfNet, we evaluate the embeddings on node clustering (node-level) and surface selection (surface-level) tasks. We compare SurfNet against state-of-the-art node embedding approaches and surface selection methods. We also demonstrate the superiority of SurfNet by comparing it against a spectral-based mesh segmentation approach. The results show that SurfNet learns better representations at the node and surface levels with less training time and fewer training samples, while generating comparable or better clustering and selection results.
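A single graph-convolution step of the kind SurfNet builds on (treat the surface as a graph, aggregate each node's neighborhood, apply a shared linear map) can be sketched as follows. The mean-aggregation rule and the toy quad graph are illustrative, not SurfNet's exact layer:

```python
import numpy as np

def gcn_layer(features, adjacency, weight):
    """One graph-convolution step on a surface graph: average each node
    with its neighbors, then apply a shared linear map with ReLU."""
    deg = adjacency.sum(axis=1, keepdims=True)
    agg = (adjacency @ features) / np.maximum(deg, 1)  # mean over neighborhood
    return np.maximum(agg @ weight, 0.0)               # (N, out_dim), ReLU

# 4-node toy surface graph (a quad), with self-loops on the diagonal
A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]], float)
pos = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)  # positions
rng = np.random.default_rng(2)
W = rng.normal(size=(3, 8))       # shared, learnable weight matrix
emb = gcn_layer(pos, A, W)        # (4, 8) node embeddings
```

Stacking several such layers lets information flow across the surface; SurfNet additionally feeds normals and velocities as input and pools node embeddings to obtain a surface-level representation.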