Volume 37 (2018)
Browsing Volume 37 (2018) by Issue Date
Now showing 1 - 20 of 253
Item Deep Adaptive Sampling for Low Sample Count Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2018) Kuznetsov, Alexandr; Kalantari, Nima Khademi; Ramamoorthi, Ravi; Jakob, Wenzel and Hachisuka, Toshiya
Recently, deep learning approaches have proven successful at removing noise from Monte Carlo (MC) rendered images at extremely low sampling rates, e.g., 1-4 samples per pixel (spp). While these methods provide dramatic speedups, they operate on uniformly sampled MC rendered images. However, the full promise of low sample counts requires both adaptive sampling and reconstruction/denoising. Unfortunately, traditional adaptive sampling techniques fail at such low sampling rates, since there is insufficient information to reliably calculate the features they require, such as variance and contrast. In this paper, we address this issue by proposing a deep learning approach for joint adaptive sampling and reconstruction of MC rendered images with extremely low sample counts. Our system consists of two convolutional neural networks (CNNs), responsible for estimating the sampling map and for denoising, separated by a renderer. Specifically, we first render a scene with one spp and then use the first CNN to estimate a sampling map, which is used to adaptively distribute three additional samples per pixel on average. We then filter the resulting render with the second CNN to produce the final denoised image. We train both networks by minimizing the error between the denoised and ground-truth images on a set of training scenes. To use backpropagation for training both networks, we propose an approach to effectively compute the gradient of the renderer. We demonstrate that our approach produces better results than other sampling techniques. On average, our 4 spp renders are comparable to 6 spp from uniform sampling with deep learning-based denoising; in other words, 50% more uniformly distributed samples are required to achieve equal quality without adaptive sampling.

Item Strain Rate Dissipation for Elastic Deformations (The Eurographics Association and John Wiley & Sons Ltd., 2018) Sánchez-Banderas, Rosa M.; Otaduy, Miguel A.; Thuerey, Nils and Beeler, Thabo
Damping determines how the energy in dynamic deformations is dissipated. The design of damping requires models where the behavior along deformation modes is easily controlled, while other motions are left unaffected. In this paper, we propose a framework for the design of damping using dissipation potentials formulated as functions of strain rate. We study simple parameterizations of the models, the application to continuum and discrete deformation models, and practical implications for implementation. We also study previous simple damping models; in particular, we demonstrate the limitations of Rayleigh damping. We analyze in detail the application of strain rate dissipation potentials to two highly different deformation models, StVK hyperelasticity and yarn-level cloth with sliding persistent contacts. These deformation models are representative of the range of applicability of the damping model.
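As a minimal illustration of a dissipation potential formulated on strain rate, the sketch below computes the damping force for a single spring from D = 0.5 * k_d * (strain rate)^2; the spring setup and the coefficient k_d are assumptions for illustration, not the paper's continuum or yarn-level formulation.

```python
import numpy as np

def strain_rate_damping_force(x_i, x_j, v_i, v_j, rest_len, k_d):
    """Damping forces from the dissipation potential D = 0.5 * k_d * strain_rate**2
    for a single spring (i, j).

    x_*, v_* : 3D positions and velocities of the endpoints.
    rest_len : rest length of the spring.
    k_d      : damping coefficient (assumed, user-chosen).
    """
    d = x_j - x_i
    d_hat = d / np.linalg.norm(d)
    # Strain rate: time derivative of (length / rest_len - 1).
    strain_rate = np.dot(v_j - v_i, d_hat) / rest_len
    # f_i = -dD/dv_i acts along the spring axis only, so rigid translations
    # and rotations of the spring are left undamped.
    f_i = (k_d * strain_rate / rest_len) * d_hat
    return f_i, -f_i  # force on i, equal and opposite force on j

# Two particles separating along x are damped, a common translation is not.
xi, xj = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
print(strain_rate_damping_force(xi, xj, np.zeros(3), np.array([1.0, 0, 0]), 1.0, 0.5))
print(strain_rate_damping_force(xi, xj, np.ones(3), np.ones(3), 1.0, 0.5))
```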
Item PencilArt: A Chromatic Penciling Style Generation Framework (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Gao, Chengying; Tang, Mengyue; Liang, Xiangguo; Su, Zhuo; Zou, Changqing; Chen, Min and Benes, Bedrich
Non‐photorealistic rendering has been an active area of research for decades, yet few works concentrate on rendering the chromatic penciling style. In this paper, we present a framework named PencilArt for generating the chromatic penciling style from wild photographs. The structural outline and the textured map that compose the chromatic pencil drawing are generated respectively. First, we take advantage of a deep neural network to produce the structural outline with proper intensity variation and conciseness. Next, for the textured map, we follow the painting process of artists and adjust the tone of input images to match the luminance histogram and pencil textures of real drawings. Eventually, we evaluate PencilArt via a series of comparisons to previous work, showing that our results better capture the main features of real chromatic pencil drawings and have an improved visual appearance.

Item Constructing 3D CSG Models from 3D Raw Point Clouds (The Eurographics Association and John Wiley & Sons Ltd., 2018) Wu, Qiaoyun; Xu, Kai; Wang, Jun; Ju, Tao and Vaxman, Amir
The Constructive Solid Geometry (CSG) tree, encoding the generative process of an object by a recursive compositional structure of bounded primitives, constitutes an important structural representation of 3D objects. Automatically recovering such a compositional structure from the raw point cloud of an object is therefore a high-level reverse engineering problem, with applications ranging from structure and functionality analysis to creative redesign. We propose an effective method to construct CSG models and trees directly over raw point clouds. Specifically, a large number of hypothetical bounded primitive candidates are first extracted from raw scans, followed by a carefully designed pruning strategy. We then approximate the target CSG model by the combination of a subset of these candidates with corresponding Boolean operations using a binary optimization technique, from which the corresponding CSG tree can be derived. Our method adopts the minimal description length concept in the point cloud analysis setting, where the objective function is designed to minimize construction error and complexity simultaneously. We demonstrate the effectiveness and robustness of our method with extensive experiments on real scan data of various complexities and styles.
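The error-plus-complexity selection of primitives and Boolean operations can be illustrated with a toy greedy variant on 2D sample points; the candidate primitives, the lambda weight and the greedy (rather than binary-optimization) search are assumptions of this sketch, not the paper's method.

```python
import numpy as np

# Sample points in the working volume and the target occupancy (True = inside).
pts = np.stack(np.meshgrid(np.linspace(-2, 2, 80), np.linspace(-2, 2, 80)), -1).reshape(-1, 2)
target = (np.abs(pts).max(1) < 1.2) & ~(np.linalg.norm(pts, axis=1) < 0.6)  # box minus disc

# Hypothetical primitive candidates, each an "inside" predicate over sample points.
def box(c, h):  return lambda p: np.abs(p - c).max(1) < h
def disc(c, r): return lambda p: np.linalg.norm(p - c, axis=1) < r
candidates = [box((0, 0), 1.2), box((0, 0), 0.8), disc((0, 0), 0.6), disc((0.5, 0.5), 0.4)]

def greedy_csg(pts, target, candidates, lam=0.01):
    """Greedily build a CSG expression, minimizing error + lam * (#primitives used)."""
    current = np.zeros(len(pts), dtype=bool)        # start from the empty solid
    tree = []
    best_score = np.mean(current != target)         # error of the empty solid
    while True:
        best = None
        for i, prim in enumerate(candidates):
            inside = prim(pts)
            for op, combined in (("union", current | inside), ("subtract", current & ~inside)):
                score = np.mean(combined != target) + lam * (len(tree) + 1)
                if score < best_score - 1e-9:
                    best, best_score = (i, op, combined), score
        if best is None:
            return tree, best_score
        i, op, current = best
        tree.append((op, i))

tree, score = greedy_csg(pts, target, candidates)
print(tree)    # e.g. [('union', 0), ('subtract', 2)]
print(score)
```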
Item Self-similarity Analysis for Motion Capture Cleaning (The Eurographics Association and John Wiley & Sons Ltd., 2018) Aristidou, Andreas; Cohen-Or, Daniel; Hodgins, Jessica K.; Shamir, Ariel; Gutierrez, Diego and Sheffer, Alla
Motion capture sequences may contain erroneous data, especially when the motion is complex or performers are interacting closely and occlusions are frequent. Common practice is to have specialists visually detect the abnormalities and fix them manually. In this paper, we present a method to automatically analyze and fix motion capture sequences by using self-similarity analysis. The premise of this work is that human motion data has a high degree of self-similarity. Therefore, given enough motion data, erroneous motions are distinct when compared to other motions. We utilize motion-words that consist of short sequences of transformations of groups of joints around a given motion frame. We search for the K-nearest neighbors (KNN) set of each word using dynamic time warping and use it to detect and fix erroneous motions automatically. We demonstrate the effectiveness of our method on various examples, and evaluate it by comparing to alternative methods and to manual cleaning.
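A minimal sketch of the underlying idea, assuming a single joint trajectory, a fixed window length and a plain dynamic-time-warping implementation (none of which are taken from the paper): windows whose mean distance to their K nearest neighbours is unusually large are flagged as candidate errors.

```python
import numpy as np

def dtw(a, b):
    """Plain O(n*m) dynamic time warping distance between two sequences
    of joint positions (arrays of shape (frames, dims))."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_anomaly_scores(motion, win=15, k=5):
    """Split one joint trajectory (frames x dims) into overlapping 'motion words'
    and score each word by the mean DTW distance to its k nearest other words."""
    words = [motion[s:s + win] for s in range(0, len(motion) - win, win // 2)]
    scores = []
    for i, w in enumerate(words):
        d = sorted(dtw(w, v) for j, v in enumerate(words) if j != i)
        scores.append(np.mean(d[:k]))
    return np.array(scores)

# Toy example: a smooth trajectory with a corrupted segment.
t = np.linspace(0, 8 * np.pi, 400)
motion = np.stack([np.sin(t), np.cos(t), 0.1 * t], axis=1)
motion[200:215] += np.random.default_rng(0).normal(0, 2.0, (15, 3))  # simulated glitch
scores = knn_anomaly_scores(motion)
print(np.argmax(scores))  # a window overlapping the corrupted frames scores highest
```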
Item CPU–GPU Parallel Framework for Real‐Time Interactive Cutting of Adaptive Octree‐Based Deformable Objects (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Jia, Shiyu; Zhang, Weizhong; Yu, Xiaokang; Pan, Zhenkuan; Chen, Min and Benes, Bedrich
A software framework taking advantage of the parallel processing capabilities of CPUs and GPUs is designed for the real‐time interactive cutting simulation of deformable objects. Deformable objects are modelled as voxels connected by links. The voxels are embedded in an octree mesh used for deformation. Cutting is performed by disconnecting links swept by the cutting tool and then adaptively refining octree elements near the cutting tool trajectory. A surface mesh used for visual display is reconstructed from disconnected links using the dual contour method. Spatial hashing of the octree mesh and topology‐aware interpolation of the distance field are used for collision detection. Our framework uses a novel GPU implementation for inter‐object collision and object self-collision, while tool‐object collision, cutting and deformation are assigned to the CPU, using multiple threads whenever possible. A novel method that splits cutting operations into four independent tasks running in parallel is designed. Our framework also performs data transfers between CPU and GPU simultaneously with other tasks to reduce their impact on performance. Simulation tests show that, compared to three‐threaded CPU implementations, our GPU-accelerated collision is 53–160% faster, and the overall simulation frame rate is 47–98% faster.

Item Modeling Fonts in Context: Font Prediction on Web Designs (The Eurographics Association and John Wiley & Sons Ltd., 2018) Zhao, Nanxuan; Cao, Ying; Lau, Rynson W. H.; Fu, Hongbo and Ghosh, Abhijeet and Kopf, Johannes
Web designers often carefully select fonts to fit the context of a web design, making the design aesthetically pleasing and effective in communication. However, selecting proper fonts for a web design is a tedious and time-consuming task, as each font has many properties, such as font face, color, and size, resulting in a very large search space. In this paper, we aim to model fonts in context by studying a novel and challenging problem: predicting fonts that match a given web design. To this end, we propose a multi-task deep neural network that jointly predicts font face, color and size for each text element on a web design, by considering multi-scale visual features and semantic tags of the web design. To train our model, we collected the CTXFont dataset, which consists of 1k professional web designs with labeled font properties. Experiments show that our model outperforms the baseline methods, achieving promising qualitative and quantitative results on the font selection task. We also demonstrate the usefulness of our method in a font selection task via a user study.

Item SetCoLa: High-Level Constraints for Graph Layout (The Eurographics Association and John Wiley & Sons Ltd., 2018) Hoffswell, Jane; Borning, Alan; Heer, Jeffrey; Jeffrey Heer and Heike Leitte and Timo Ropinski
Constraints enable flexible graph layout by combining the ease of automatic layout with customizations for a particular domain. However, constraint-based layout often requires many individual constraints defined over specific nodes and node pairs. In addition to the effort of writing and maintaining a large number of similar constraints, such constraints are specific to the particular graph and thus cannot generalize to other graphs in the same domain. To facilitate the specification of customized and generalizable constraint layouts, we contribute SetCoLa: a domain-specific language for specifying high-level constraints relative to properties of the backing data. Users identify node sets based on data or graph properties and apply high-level constraints within each set. Applying constraints to node sets rather than individual nodes reduces specification effort and facilitates reapplication of customized layouts across distinct graphs. We demonstrate the conciseness, generalizability, and expressiveness of SetCoLa on a series of real-world examples from ecological networks, biological systems, and social networks.
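To illustrate how one set-level rule expands into many node-level constraints, here is a plain-Python concept sketch; the node properties and the constraint dictionaries are assumptions for illustration and do not reproduce SetCoLa's actual syntax or its constraint-solver backend.

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical graph: nodes carry a data property used to form sets.
nodes = [
    {"id": "a", "layer": 0}, {"id": "b", "layer": 0},
    {"id": "c", "layer": 1}, {"id": "d", "layer": 1}, {"id": "e", "layer": 1},
]

def expand_alignment(nodes, key, axis="y"):
    """Expand one high-level rule ('align every set sharing `key` on `axis`')
    into the low-level pairwise constraints a layout solver consumes."""
    sets = defaultdict(list)
    for n in nodes:
        sets[n[key]].append(n["id"])
    constraints = []
    for members in sets.values():
        for u, v in combinations(members, 2):
            constraints.append({"type": "align", "axis": axis, "left": u, "right": v})
    return constraints

low_level = expand_alignment(nodes, key="layer")
print(len(low_level), low_level[:2])
# One declarative rule replaces 1 + 3 = 4 hand-written pairwise constraints here,
# and re-applies unchanged to any other graph that has a 'layer' property.
```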
Item Packable Springs (The Eurographics Association and John Wiley & Sons Ltd., 2018) Wolff, Katja; Poranne, Roi; Glauser, Oliver; Sorkine-Hornung, Olga; Gutierrez, Diego and Sheffer, Alla
Laser cutting is an appealing fabrication process due to the low cost of materials and extremely fast fabrication. However, the design space afforded by laser cutting is limited, since only flat panels can be cut. Previous methods for manufacturing from flat sheets usually approximate 3D objects roughly by polyhedrons or cross sections. Computational design methods for connecting, interlocking, or folding several laser-cut panels have been introduced; to obtain a good approximation, these methods require numerous parts and long assembly times. In this paper, we propose a radically different approach: our approximation is based on cutting thin, planar spirals out of flat panels. When such spirals are pulled apart, they take on the shape of a 3D spring whose contours are similar to the input object. We devise an optimization problem that aims to minimize the number of required parts, thus reducing costs and fabrication time, while ensuring that the resulting spring mimics the shape of the original object. In addition to rapid fabrication and assembly, our method enables compact packaging and storage as flat parts. We also demonstrate its use for creating armatures for sculptures and moulds for filling, with potential applications in architecture or construction.
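The planar cutting primitive can be illustrated by sampling an Archimedean spiral inside a disc, the kind of flat cut path that opens into a 3D spring when pulled apart; the radius, winding spacing and sampling density below are assumed values, not parameters from the paper.

```python
import numpy as np

def archimedean_spiral(outer_radius, spacing, samples_per_turn=100):
    """Sample points of a flat Archimedean spiral r = spacing/(2*pi) * theta,
    a closed-form stand-in for a laser-cut spiral path."""
    b = spacing / (2 * np.pi)          # radial growth per radian
    theta_max = outer_radius / b       # angle at which the outer radius is reached
    n = int(samples_per_turn * theta_max / (2 * np.pi))
    theta = np.linspace(0, theta_max, n)
    r = b * theta
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# A 60 mm radius spiral with 4 mm between successive windings (assumed values).
path = archimedean_spiral(outer_radius=60.0, spacing=4.0)
print(path.shape, path[-1])            # the last sample lies on the outer radius
# Fewer windings (larger spacing) give a stiffer spring but a coarser
# approximation of the target contour, which is the trade-off being optimized.
```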
Item Hair Modeling and Simulation by Style (The Eurographics Association and John Wiley & Sons Ltd., 2018) Jung, Seunghwan; Lee, Sung-Hee; Gutierrez, Diego and Sheffer, Alla
As the deformation behaviors of hair strands vary greatly depending on the hairstyle, the computational cost and accuracy of hair movement simulations can be significantly improved by applying simulation methods specific to a certain style. This paper makes two contributions with regard to the simulation of various hairstyles. First, we propose a novel method to reconstruct simulatable hair strands from hair meshes created by artists. Manually created hair meshes consist of numerous mesh patches, and strand reconstruction is challenged by the absence of connectivity information among the patches belonging to the same strand and by the omission of hidden parts of strands during manual creation. To this end, we develop a two-stage spectral clustering method for estimating the degree of connectivity among patches and a strand-growing method that preserves hairstyles. Next, we develop a hairstyle classification method for style-specific simulations. In particular, we propose a set of features for efficient classification and show that classifiers trained with the proposed features have higher accuracy than those trained with naive features. Our method applies efficient simulation methods according to the hairstyle without specific user input, and is thus well suited for real-time simulation.

Item Feature Generation for Adaptive Gradient-Domain Path Tracing (The Eurographics Association and John Wiley & Sons Ltd., 2018) Back, Jonghee; Yoon, Sung-Eui; Moon, Bochang; Fu, Hongbo and Ghosh, Abhijeet and Kopf, Johannes
In this paper, we propose a new technique that incorporates recent adaptive rendering approaches built upon local regression theory into a gradient-domain path tracing framework, in order to achieve high-quality rendering results. Our method aims to reduce the random artifacts introduced by random sampling of image colors and gradients. Our high-level approach is to identify a feature image from noisy gradients and pass it to an existing local-regression-based adaptive method, so that adaptive sampling and reconstruction using our feature can boost the performance of gradient-domain rendering. To realize this idea, we derive an ideal feature in the form of image gradients and propose an estimation process for the ideal feature in the presence of noise in the image gradients. We demonstrate that our integrated adaptive solution improves the performance of a gradient-domain path tracer by seamlessly incorporating recent adaptive sampling and reconstruction strategies through our estimated feature.

Item A General Illumination Model for Molecular Visualization (The Eurographics Association and John Wiley & Sons Ltd., 2018) Casajus, Pedro Hermosilla; Vázquez, Pere-Pau; Vinacua, Àlvar; Ropinski, Timo; Jeffrey Heer and Heike Leitte and Timo Ropinski
Several visual representations have been developed over the years to visualize molecular structures and to enable a better understanding of the underlying chemical processes. Today, the most frequently used atom-based representations are the Space-filling, the Solvent Excluded Surface, the Balls-and-Sticks, and the Licorice models. While each of these representations has its individual benefits, spatial arrangements in large-scale models can be difficult to interpret with current visualization techniques. It has been shown in the past that global illumination techniques improve the perception of molecular visualizations; unfortunately, existing approaches are tailored towards a single visual representation. We propose a general illumination model for molecular visualization that is valid across different representations. With our illumination model, it becomes possible, for the first time, to achieve consistent illumination among all atom-based molecular representations. The proposed model can furthermore be evaluated in real time, as it employs an analytical solution to simulate diffuse light interactions between objects. To derive such a solution for these rather complicated and diverse visual representations, we use regression analysis together with adapted parameter sampling strategies, as well as shape-parametrization-guided sampling, applied to the geometric building blocks of the targeted visual representations. We discuss the proposed sampling strategies and the derived illumination model, and demonstrate its capabilities when visualizing several dynamic molecules.

Item Towards User-Centered Active Learning Algorithms (The Eurographics Association and John Wiley & Sons Ltd., 2018) Bernard, Jürgen; Zeppelzauer, Matthias; Lehmann, Markus; Müller, Martin; Sedlmair, Michael; Jeffrey Heer and Heike Leitte and Timo Ropinski
The labeling of data sets is a time-consuming task, yet an important prerequisite for machine learning and visual analytics. Visual-interactive labeling (VIAL) gives users an active role in the labeling process, with the goal of combining the potentials of humans and machines to make labeling more efficient. Recent experiments showed that users apply different strategies when selecting instances for labeling with visual-interactive interfaces. In this paper, we contribute a systematic quantitative analysis of such user strategies. We identify computational building blocks of user strategies, formalize them, and investigate their potential for different machine learning tasks in systematic experiments. The core insights of our experiments are as follows. First, particular user strategies can considerably mitigate the bootstrap (cold start) problem in early labeling phases. Second, they have the potential to outperform existing active learning strategies in later phases. Third, the identified core building blocks can serve as the basis for novel selection strategies. Overall, we observed that data-based user strategies (clusters, dense areas) work considerably well in early phases, while model-based user strategies (e.g., class separation) perform better during later phases. The insights gained from this work can be applied to develop novel active learning approaches as well as to better guide users in visual-interactive labeling.
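A minimal sketch of the contrast drawn in the experiments, data-based versus model-based instance selection, using plain NumPy k-means-style centroids and a toy decision value; the pool, the features and the scoring are assumptions, not the VIAL systems studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
pool = np.vstack([rng.normal(c, 0.3, (100, 2)) for c in [(0, 0), (3, 0), (0, 3)]])

def data_based_selection(X, n_clusters=3, iters=20):
    """Cold-start strategy: pick the instance nearest each cluster centre,
    so the first labels already cover the main groups in the data."""
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(iters):                       # plain Lloyd iterations
        assign = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[assign == k].mean(0) if np.any(assign == k) else centers[k]
                            for k in range(n_clusters)])
    return [int(np.argmin(((X - c) ** 2).sum(-1))) for c in centers]

def model_based_selection(X, decision_values, n=3):
    """Later-phase strategy: pick instances closest to the class boundary
    (smallest |decision value| of whatever classifier is currently trained)."""
    return list(np.argsort(np.abs(decision_values))[:n])

first_labels = data_based_selection(pool)
print(first_labels)                              # one representative per cluster
toy_decision = pool[:, 0] - 1.5                  # stand-in for a classifier's margin
print(model_based_selection(pool, toy_decision))
```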
Item Efficient BVH-based Collision Detection Scheme with Ordering and Restructuring (The Eurographics Association and John Wiley & Sons Ltd., 2018) Wang, Xinlei; Tang, Min; Manocha, Dinesh; Tong, Ruofeng; Gutierrez, Diego and Sheffer, Alla
Bounding volume hierarchies (BVHs) have been widely adopted as the acceleration structure in broad-phase collision detection. Previous state-of-the-art BVH-based collision detection approaches exploit the spatio-temporal coherence of simulations by maintaining a bounding volume test tree (BVTT) front. A major drawback of these algorithms is that large deformations in the scene decrease culling efficiency and slow down collision queries. Moreover, for front-based methods, inefficient caching on the GPU caused by the arbitrary layout of BVH and BVTT front nodes becomes a critical performance issue. We present a fast and robust BVH-based collision detection scheme on the GPU that addresses these problems by ordering and restructuring BVHs and BVTT fronts. Our techniques are based on the use of histogram sort and an auxiliary structure, the BVTT front log, through which we analyze the dynamic status of the BVTT front and the BVH quality. Our approach efficiently handles inter- and intra-object collisions and performs especially well in simulations with considerable spatio-temporal coherence. The benchmark results demonstrate that our approach is significantly faster than the previous BVH-based method, and also outperforms other state-of-the-art spatial subdivision schemes in terms of speed.

Item Inverse Kinematics Techniques in Computer Graphics: A Survey (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Aristidou, A.; Lasenby, J.; Chrysanthou, Y.; Shamir, A.; Chen, Min and Benes, Bedrich
Inverse kinematics (IK) is the use of kinematic equations to determine the joint parameters of a manipulator so that the end effector moves to a desired position; IK can be applied in many areas, including robotics, engineering, computer graphics and video games. In this survey, we present a comprehensive review of the IK problem and the solutions developed over the years from the computer graphics point of view. The paper starts with the definition of forward and inverse kinematics and their mathematical formulations, and explains how to distinguish the unsolvable cases, indicating when a solution is available. The IK literature in this report is divided into four main families of methods. A timeline illustrating key methods is presented, explaining how IK approaches have progressed over the years. The most popular IK methods are discussed with regard to their performance, computational cost and the smoothness of their resulting postures, and we suggest which family of IK solvers is best suited for particular problems. Finally, we indicate the limitations of current IK methodologies and propose future research directions.
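As a worked example of the analytic family of IK solvers covered by the survey, the following is the standard closed-form two-joint (two-bone) IK in the plane via the law of cosines; the link lengths and the clamping of unreachable targets are choices of this sketch.

```python
import numpy as np

def two_bone_ik(target, l1=1.0, l2=1.0):
    """Closed-form planar IK for a 2-link chain rooted at the origin.
    Returns joint angles (shoulder, elbow) placing the end effector at
    `target`, clamping unreachable targets to the chain's maximum reach."""
    x, y = target
    d = min(np.hypot(x, y), l1 + l2 - 1e-9)      # clamp to reachable distance
    # Law of cosines for the elbow angle.
    cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = np.arccos(np.clip(cos_elbow, -1.0, 1.0))
    # Shoulder angle: direction to the target minus the interior correction.
    shoulder = np.arctan2(y, x) - np.arctan2(l2 * np.sin(elbow), l1 + l2 * np.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1=1.0, l2=1.0):
    """Forward kinematics, used here to verify the IK solution."""
    ex = l1 * np.cos(shoulder) + l2 * np.cos(shoulder + elbow)
    ey = l1 * np.sin(shoulder) + l2 * np.sin(shoulder + elbow)
    return ex, ey

angles = two_bone_ik((1.2, 0.8))
print(np.round(forward(*angles), 6))             # ~ (1.2, 0.8)
```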
Item Field-Aligned and Lattice-Guided Tetrahedral Meshing (The Eurographics Association and John Wiley & Sons Ltd., 2018) Ni, Saifeng; Zhong, Zichun; Huang, Jin; Wang, Wenping; Guo, Xiaohu; Ju, Tao and Vaxman, Amir
We present a particle-based approach to generate field-aligned tetrahedral meshes, guided by cubic lattices, including BCC and FCC lattices. Given a volumetric domain with an input frame field and a user-specified edge length for the cubic lattice, we optimize a set of particles to form the desired lattice pattern. A Gaussian Hole Kernel associated with each particle is constructed. Minimizing the sum of the kernels of all particles encourages the particles to form the desired layout, e.g., field-aligned BCC and FCC. The resulting set of particles can be connected to yield a high-quality field-aligned tetrahedral mesh. As demonstrated by experiments and comparisons, the field-aligned and lattice-guided approach produces higher-quality isotropic and anisotropic tetrahedral meshes than state-of-the-art meshing methods.

Item EuroVis 2018: Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2018) Heer, Jeffrey; Leitte, Heike; Ropinski, Timo; Jeffrey Heer and Heike Leitte and Timo Ropinski

Item Visual Analysis of Protein-ligand Interactions (The Eurographics Association and John Wiley & Sons Ltd., 2018) Vázquez, Pere-Pau; Casajus, Pedro Hermosilla; Guallar, Victor; Estrada, Jorge; Vinacua, Àlvar; Jeffrey Heer and Heike Leitte and Timo Ropinski
The analysis of protein-ligand interactions is complex because of the many factors at play. Most current methods for visual analysis provide this information in the form of simple 2D plots, which, besides being quite space-hungry, often encode only a small number of different properties. In this paper, we present a system for compact 2D visualization of molecular simulations. It purposely omits most spatial information and presents physical information associated with single molecular components and their pairwise interactions through a set of 2D InfoVis tools with coordinated views, suitable interaction, and focus+context techniques for analyzing large amounts of data. The system provides a wide range of motifs for elements such as protein secondary structures or hydrogen bond networks, and a set of tools for their interactive inspection, both for a single simulation and for comparing two different simulations. As a result, the analysis of protein-ligand interactions in molecular simulation trajectories is greatly facilitated.
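One example of the kind of per-interaction quantity that such compact 2D views aggregate is a per-residue contact frequency over trajectory frames; the synthetic coordinates and the simple distance cutoff below are assumptions for illustration, not the system's input format or interaction model.

```python
import numpy as np

def contact_frequency(protein_traj, ligand_traj, residue_ids, cutoff=4.0):
    """Fraction of trajectory frames in which each residue has at least one
    atom within `cutoff` of any ligand atom.

    protein_traj : (frames, protein_atoms, 3) coordinates
    ligand_traj  : (frames, ligand_atoms, 3) coordinates
    residue_ids  : (protein_atoms,) residue index of every protein atom
    """
    frames = len(protein_traj)
    residues = np.unique(residue_ids)
    counts = np.zeros(len(residues))
    for f in range(frames):
        # Pairwise distances between every protein atom and every ligand atom.
        d = np.linalg.norm(protein_traj[f][:, None, :] - ligand_traj[f][None, :, :], axis=-1)
        atom_in_contact = (d < cutoff).any(axis=1)
        for r_i, r in enumerate(residues):
            counts[r_i] += atom_in_contact[residue_ids == r].any()
    return residues, counts / frames

# Synthetic toy data: 50 frames, 30 protein atoms in 10 residues, 5 ligand atoms.
rng = np.random.default_rng(2)
prot = rng.normal(0, 5, (50, 30, 3))
lig = rng.normal(0, 1, (50, 5, 3))
res_ids = np.repeat(np.arange(10), 3)
residues, freq = contact_frequency(prot, lig, res_ids)
print(dict(zip(residues.tolist(), np.round(freq, 2))))  # data behind one heatmap row per residue
```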
Item Wavejets: A Local Frequency Framework for Shape Details Amplification (The Eurographics Association and John Wiley & Sons Ltd., 2018) Béarzi, Yohann; Digne, Julie; Chaine, Raphaëlle; Gutierrez, Diego and Sheffer, Alla
Detail enhancement is a well-studied area of 3D rendering and image processing, which has few equivalents for 3D shape processing. To enhance details, one needs an efficient analysis tool to express the local surface dynamics. We introduce Wavejets, a new function basis for locally decomposing a shape expressed over the local tangent plane, considering both angular oscillations of the surface around each point and a radial polynomial. We link the Wavejets coefficients to surface derivatives and give theoretical guarantees for their precision and stability with respect to an approximate tangent plane. The coefficients can be used for shape detail amplification, to enhance, invert or distort details, by operating either on the surface point positions or on the normals. From a practical point of view, we derive an efficient way of estimating Wavejets on point sets and demonstrate experimentally the amplification results with respect to noise or basis truncation.

Item Application‐Specific Tone Mapping Via Genetic Programming (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Debattista, K.; Chen, Min and Benes, Bedrich
High dynamic range (HDR) imagery permits the manipulation of real‐world data free from the limitations of traditional, low dynamic range (LDR) content. The process of retargeting HDR content to traditional LDR imagery via tone mapping operators (TMOs) is useful for visualizing HDR content on traditional displays, supporting backwards‐compatible HDR compression and, more recently, preparing input for a wide variety of computer vision applications. This work presents the automatic generation of TMOs for specific applications via the evolutionary computing method of genetic programming (GP). A straightforward, generic GP method that generates TMOs for a given fitness function and HDR content is presented. Its efficacy is demonstrated in the context of three applications: visualization of HDR content on LDR displays, feature mapping and compression. For these applications, results show good performance for the generated TMOs when compared to traditional methods. Furthermore, they demonstrate that the method is generalizable and could be used across various applications that require TMOs but for which dedicated, successful TMOs have not yet been discovered.
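A toy illustration of evolving tone mapping operators with genetic programming: random expression trees over a few primitives, mutation-only evolution, and an assumed fitness (outputs in [0, 1] with a mid-grey mean); none of this reproduces the paper's fitness functions or GP configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
UNARY = {"log1p": np.log1p, "sqrt": np.sqrt}
BINARY = {"add": np.add, "mul": np.multiply,
          "div": lambda a, b: a / (b + 1e-6)}           # protected division

def random_tree(depth=3):
    """Random expression tree over the HDR luminance 'L' and positive constants."""
    if depth == 0 or rng.random() < 0.3:
        return "L" if rng.random() < 0.5 else round(float(rng.uniform(0.1, 2.0)), 3)
    if rng.random() < 0.5:
        return (str(rng.choice(list(UNARY))), random_tree(depth - 1))
    return (str(rng.choice(list(BINARY))), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, L):
    if tree == "L":
        return L
    if isinstance(tree, float):
        return np.full_like(L, tree)
    op, *children = tree
    fn = UNARY[op] if len(children) == 1 else BINARY[op]
    return fn(*(evaluate(c, L) for c in children))

def fitness(tree, L):
    """Assumed fitness: mapped values should lie in [0, 1] with a mid-grey mean."""
    out = evaluate(tree, L)
    in_range = np.mean((out >= 0) & (out <= 1))
    return in_range - abs(np.clip(out, 0, 1).mean() - 0.5)

def mutate(tree, p=0.2):
    """Subtree mutation: occasionally replace a node with a fresh random subtree."""
    if rng.random() < p:
        return random_tree(depth=2)
    if tree == "L" or isinstance(tree, float):
        return tree
    return (tree[0],) + tuple(mutate(c) for c in tree[1:])

L = rng.lognormal(mean=0.0, sigma=2.0, size=10_000)      # synthetic HDR luminance
population = [random_tree() for _ in range(200)]
for _ in range(30):                                      # mutation-only evolution
    population.sort(key=lambda t: fitness(t, L), reverse=True)
    survivors = population[:50]
    population = survivors + [mutate(survivors[rng.integers(50)]) for _ in range(150)]

best = max(population, key=lambda t: fitness(t, L))
print(best, round(float(fitness(best, L)), 3))           # an evolved toy TMO expression
```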