39-Issue 6
Browsing 39-Issue 6 by Issue Date
Now showing 1 - 20 of 37
Item Interactive Programming for Parametric CAD (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Mathur, Aman; Pirron, Marcus; Zufferey, Damien; Benes, Bedrich and Hauser, Helwig
Parametric computer‐aided design (CAD) enables the description of a family of objects, wherein each valid combination of parameter values results in a different final form. Although Graphical User Interface (GUI)‐based CAD tools are significantly more popular, GUI operations do not carry a semantic description and are therefore brittle with respect to changes in parameter values. Programmatic interfaces, on the other hand, are more robust due to an exact specification of how the operations are applied. However, programming is unintuitive and has a steep learning curve. In this work, we link the interactivity of GUIs with the robustness of programming. Inspired by programme synthesis by example, our technique synthesizes code representative of selections made by users in the GUI. Through experiments, we demonstrate that our technique can synthesize relevant and robust sub‐programmes in a reasonable amount of time. A user study reveals that our interface offers significant improvements over a programming‐only interface.

Item Hyperspectral Inverse Skinning (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Liu, Songrun; Tan, Jianchao; Deng, Zhigang; Gingold, Yotam; Benes, Bedrich and Hauser, Helwig
In example‐based inverse linear blend skinning (LBS), a collection of poses (e.g. animation frames) is given, and the goal is to find skinning weights and transformation matrices that closely reproduce the input. These poses may come from physical simulation, direct mesh editing, motion capture or another deformation rig. We provide a re‐formulation of inverse skinning as a problem in high‐dimensional Euclidean space.
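The forward linear blend skinning model that the inverse problem above seeks to recover can be sketched as follows; the rest pose, weights and transforms below are illustrative stand-ins, not data from the paper:

```python
import numpy as np

def lbs(rest_vertices, weights, transforms):
    """Linear blend skinning: each deformed vertex is a convex
    combination of per-bone affine transforms applied to the rest pose.
    rest_vertices: (V, 3); weights: (V, B), rows summing to 1;
    transforms: (B, 3, 4) affine matrices [R | t]."""
    V = rest_vertices.shape[0]
    homo = np.hstack([rest_vertices, np.ones((V, 1))])     # (V, 4) homogeneous
    per_bone = np.einsum('bij,vj->vbi', transforms, homo)  # (V, B, 3)
    return np.einsum('vb,vbi->vi', weights, per_bone)      # (V, 3)

# Two bones: identity and a translation by (1, 0, 0).
T = np.zeros((2, 3, 4))
T[0, :, :3] = np.eye(3)
T[1, :, :3] = np.eye(3)
T[1, :, 3] = [1.0, 0.0, 0.0]
rest = np.array([[0.0, 0.0, 0.0]])
w = np.array([[0.5, 0.5]])      # vertex weighted halfway between the bones
print(lbs(rest, w, T))          # vertex moves to (0.5, 0, 0)
```

Inverse LBS, as posed in the abstract, recovers `weights` and `transforms` given only the per-pose vertex positions.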
The transformation matrices applied to a vertex across all poses can be thought of as a point in high dimensions. We cast the inverse LBS problem as one of finding a tight‐fitting simplex around these points (a well‐studied problem in hyperspectral imaging). Although we do not observe transformation matrices directly, the 3D position of a vertex across all of its poses defines an affine subspace, or flat. We solve a ‘closest flat’ optimization problem to find points on these flats, and then compute a minimum‐volume enclosing simplex whose vertices are the transformation matrices and whose barycentric coordinates are the skinning weights. We are able to create LBS rigs with state‐of‐the‐art reconstruction error and state‐of‐the‐art compression ratios for mesh animation sequences. Our solution does not consider weight sparsity or the rigidity of recovered transformations. We include observations and insights into the closest flat problem. Its ideal solution and the optimal LBS reconstruction error remain open problems.

Item Guide Me in Analysis: A Framework for Guidance Designers (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Ceneda, Davide; Andrienko, Natalia; Andrienko, Gennady; Gschwandtner, Theresia; Miksch, Silvia; Piccolotto, Nikolaus; Schreck, Tobias; Streit, Marc; Suschnigg, Josef; Tominski, Christian; Benes, Bedrich and Hauser, Helwig
Guidance is an emerging topic in the field of visual analytics. Guidance can support users in pursuing their analytical goals more efficiently and help in making the analysis successful. However, it is not clear how guidance approaches should be designed and what specific factors should be considered for effective support. In this paper, we approach this problem from the perspective of guidance designers. We present a framework comprising requirements and a set of specific phases designers should go through when designing guidance for visual analytics.
We relate this process to a set of quality criteria we aim to support with our framework, which are necessary for obtaining a suitable and effective guidance solution. To demonstrate the practical usability of our methodology, we apply our framework to the design of guidance in three analysis scenarios and a design walk‐through session. Moreover, we list the emerging challenges and report how the framework can be used to design guidance solutions that mitigate these issues.

Item Non‐Uniform Subdivision Surfaces with Sharp Features (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Tian, Yufeng; Li, Xin; Chen, Falai; Benes, Bedrich and Hauser, Helwig
Sharp features are important characteristics in surface modelling. However, it remains a difficult task to create complex sharp features for Non‐Uniform Rational B‐Spline‐compatible subdivision surfaces. Current non‐uniform subdivision methods generally produce sharp features by setting zero knot intervals, and these sharp features may have unpleasant visual effects. In this paper, we construct a non‐uniform subdivision scheme that creates complex sharp features by extending the eigen‐polyhedron technique. The new scheme allows sharp edges to be specified arbitrarily in the initial mesh and generates non‐uniform cubic B‐spline curves to represent the sharp features. Experimental results demonstrate that the present method generates visually more pleasing sharp features than existing approaches.

Item Real‐Time Deformation with Coupled Cages and Skeletons (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Corda, F.; Thiery, J. M.; Livesu, M.; Puppo, E.; Boubekeur, T.; Scateni, R.; Benes, Bedrich and Hauser, Helwig
Skeleton‐based and cage‐based deformation techniques represent the two most popular approaches to control real‐time deformations of digital shapes and are, to a vast extent, complementary to one another. Despite their complementary roles, high‐end modelling packages do not allow for seamless integration of such control structures, thus placing a considerable burden on the user to keep them synchronized. In this paper, we propose a framework that seamlessly combines rigging skeletons and deformation cages, granting artists a real‐time deformation system that operates using any smooth combination of the two approaches. By coupling the deformation spaces of cages and skeletons, we access a much larger space, containing poses that are impossible to obtain by acting solely on a skeleton or a cage. Our method is oblivious to the specific techniques used to perform skinning and cage‐based deformation, making it compatible with pre‐existing tools. We demonstrate the usefulness of our hybrid approach on a variety of examples.

Item Image Morphing With Perceptual Constraints and STN Alignment (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Fish, N.; Zhang, R.; Perry, L.; Cohen‐Or, D.; Shechtman, E.; Barnes, C.; Benes, Bedrich and Hauser, Helwig
In image morphing, a sequence of plausible frames is synthesized and composited together to form a smooth transformation between given instances. Intermediates must remain faithful to the input, stand on their own as members of the set and maintain a well‐paced visual transition from one to the next. In this paper, we propose a conditional generative adversarial network (GAN) morphing framework operating on a pair of input images.
The network is trained to synthesize frames corresponding to temporal samples along the transformation, and learns a proper shape prior that enhances the plausibility of intermediate frames. While individual frame plausibility is boosted by the adversarial setup, a special training protocol producing sequences of frames, combined with a perceptual similarity loss, promotes smooth transformation over time. Explicit specification of correspondences is replaced with a grid‐based freeform deformation spatial transformer that predicts the geometric warp between the inputs, instituting the smooth geometric effect by bringing the shapes into an initial alignment. We provide comparisons to classic as well as latent space morphing techniques, and demonstrate that, given a set of images for self‐supervision, our network learns to generate visually pleasing morphing effects featuring believable in‐betweens, with robustness to changes in shape and texture, requiring no correspondence annotation.

Item A Discriminative Multi‐Channel Facial Shape (MCFS) Representation and Feature Extraction for 3D Human Faces (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Gong, Xun; Li, Xin; Li, Tianrui; Liang, Yongqing; Benes, Bedrich and Hauser, Helwig
Building an effective representation for 3D face geometry is essential for face analysis tasks such as landmark detection, face recognition and reconstruction. This paper proposes a Multi‐Channel Facial Shape (MCFS) representation that consists of depth, hand‐engineered feature and attention maps to construct a 3D facial descriptor. In addition, a multi‐channel adjustment mechanism, named filtered squeeze and reversed excitation (FSRE), is proposed to re‐organize MCFS data. To assign a suitable weight to each channel, FSRE learns the importance of each layer automatically during the training phase.
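The abstract does not detail FSRE itself; as a rough illustration only, a generic squeeze-and-excitation-style channel reweighting (the family of mechanisms of which FSRE is described as a filtered, reversed variant) might look like this, with all shapes and weights purely illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_reweight(features, w1, w2):
    """Generic squeeze-and-excitation: global-average-pool each channel
    ('squeeze'), pass the statistics through a small two-layer MLP
    ('excitation'), then rescale each channel by its learned weight.
    features: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    squeezed = features.mean(axis=(1, 2))       # (C,) per-channel statistics
    hidden = np.maximum(w1 @ squeezed, 0.0)     # ReLU bottleneck
    scale = sigmoid(w2 @ hidden)                # (C,) weights in (0, 1)
    return features * scale[:, None, None]

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))          # 8 illustrative channels
w1 = rng.standard_normal((2, 8))
w2 = rng.standard_normal((8, 2))
out = channel_reweight(feats, w1, w2)
print(out.shape)                                # channels rescaled in place
```

In a trained network `w1` and `w2` are learned, which is how such a block assigns per-channel importance automatically.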
MCFS and FSRE blocks collaborate effectively to build a robust 3D facial shape representation with excellent discriminative ability. Extensive experimental results on both high‐resolution and low‐resolution face datasets show that facial features extracted by our framework outperform existing methods. The representation is stable against occlusions, data corruption, expressions and pose variations. Also, unlike traditional methods of 3D face feature extraction, which often take minutes to create 3D features, our system runs in real time.

Item From 2.5D Bas‐relief to 3D Portrait Model (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Zhang, Yu‐Wei; Wang, Wenping; Chen, Yanzhao; Liu, Hui; Ji, Zhongping; Zhang, Caiming; Benes, Bedrich and Hauser, Helwig
In contrast to a 3D model, which can be freely observed, a portrait bas‐relief projects slightly from the background and is limited to a fixed viewpoint. In this paper, we propose a novel method to reconstruct the underlying 3D shape from a single 2.5D bas‐relief, providing observers with wider viewing perspectives. Our target is to make the reconstructed portrait have a natural depth ordering and an appearance similar to the input. To achieve this, we first use a 3D template face to fit the portrait. Then, we optimize the face shape by normal transfer and Poisson surface reconstruction. The hair and body regions are finally reconstructed and combined with the 3D face. From the resulting 3D shape, one can generate new reliefs with varying poses and thickness, freeing the input from its fixed view.
A number of experimental results verify the effectiveness of our method.

Item Real‐Time Glints Rendering With Pre‐Filtered Discrete Stochastic Microfacets (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Wang, Beibei; Deng, Hong; Holzschuch, Nicolas; Benes, Bedrich and Hauser, Helwig
Many real‐life materials have a sparkling appearance. Examples include metallic paints, sparkling fabrics and snow. Simulating these sparkles is important for realistic rendering but expensive. As sparkles come from small shiny particles reflecting light into a specific direction, they are very challenging for illumination simulation. Existing approaches use a four‐dimensional hierarchy, searching for light‐reflecting particles simultaneously in space and direction. This approach is accurate, but extremely expensive. A separable model is much faster, but still not suitable for real‐time applications. The performance problem is even worse when illumination comes from environment maps, as they require either a large sample count per pixel or pre‐filtering. Pre‐filtering is incompatible with the existing sparkle models, due to their discrete multi‐scale representation. In this paper, we present a GPU‐friendly, pre‐filtered model for real‐time simulation of sparkles and glints. Our method simulates glints under both environment maps and point light sources in real time, with an added cost of just 10 ms per frame at full high‐definition resolution.
Editing material properties requires extra computations but remains real time, with an added cost of 10 ms per frame.

Item Issue Information (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Benes, Bedrich and Hauser, Helwig

Item A Survey of Image Synthesis Methods for Visual Machine Learning (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Tsirikoglou, A.; Eilertsen, G.; Unger, J.; Benes, Bedrich and Hauser, Helwig
Image synthesis designed for machine learning applications provides the means to efficiently generate large quantities of training data while controlling the generation process to provide the best distribution and content variety. Given the demands of deep learning applications, synthetic data have the potential of becoming a vital component in the training pipeline. Over the last decade, a wide variety of training data generation methods has been demonstrated. The potential of future development calls for bringing these together for comparison and categorization. This survey provides a comprehensive list of the existing image synthesis methods for visual machine learning. These are categorized in the context of image generation, using a taxonomy based on modelling and rendering, while a classification is also made concerning the computer vision applications in which they are used. We focus on the computer graphics aspects of the methods, to promote future image generation for machine learning. Finally, each method is assessed in terms of quality and reported performance, providing a hint of its expected learning potential. The report serves as a comprehensive reference, targeting both the application and data development sides.
A list of all methods and papers reviewed herein can be found at .

Item Exploring the Effects of Aggregation Choices on Untrained Visualization Users' Generalizations From Data (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Nguyen, F.; Qiao, X.; Heer, J.; Hullman, J.; Benes, Bedrich and Hauser, Helwig
Visualization system designers must decide whether and how to aggregate data by default. Aggregating distributional information in a single summary mark, such as a mean or sum, simplifies interpretation, but may lead untrained users to overlook distributional features. We ask: how are the conclusions drawn by untrained visualization users affected by aggregation strategy? We present two controlled experiments comparing generalizations about a population that untrained users made from visualizations summarizing either a 1000 record or 50 record sample with either a single mean summary mark, a disaggregated view with one mark per observation, or a view overlaying a mean summary mark atop a disaggregated view. While we observe no reliable effect of aggregation strategy on generalization accuracy at either sample size, users of purely disaggregated views were slightly less confident in their generalizations on average than users whose views showed a single mean summary mark, and less likely to engage in dichotomous thinking about effects as either present or absent. Comparing results from the 1000 record to the 50 record data set, we see a considerably larger decrease in the number of generalizations produced and in reported confidence among viewers who saw disaggregated data relative to those who saw only mean summary marks.

Item Making Sense of Scientific Simulation Ensembles With Semantic Interaction (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Dahshan, M.; Polys, N. F.; Jayne, R. S.; Pollyea, R. M.; Benes, Bedrich and Hauser, Helwig
In the study of complex physical systems, scientists use simulations to study the effects of different models and parameters. Seeking to understand the influence of and relationships among multiple dimensions, they typically run many simulations and vary the initial conditions in what are known as ‘ensembles’. Ensembles are thus collections of runs, each of which is multi‐dimensional and multi‐variate. In order to understand the connections between simulation parameters and patterns in the output data, we have been developing an approach to the visual analysis of scientific data that merges human expertise and intuition with machine learning and statistics. Our approach is manifested in a new visualization tool, GLEE (Graphically‐Linked Ensemble Explorer), that allows scientists to explore, search, filter and make sense of their ensembles. GLEE uses visualization and semantic interaction (SI) techniques to enable scientists to find similarities and differences between runs, find correlations between different parameters, and explore relations and correlations across and between different runs and parameters. Our approach supports scientists in selecting interesting subsets of runs in order to investigate and summarize the factors and statistics that show variations and consistencies across different runs. In this paper, we evaluate our tool with experts to understand its strengths and weaknesses for optimization and inverse problems.

Item Multi‐Level Memory Structures for Simulating and Rendering Smoothed Particle Hydrodynamics (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Winchenbach, R.; Kolb, A.; Benes, Bedrich and Hauser, Helwig
In this paper, we present a novel hash map‐based sparse data structure for Smoothed Particle Hydrodynamics, which allows for efficient neighbourhood queries in spatially adaptive simulations as well as direct ray tracing of fluid surfaces.
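The paper's multi-level structure is not reproduced here, but the fixed-radius neighbourhood query that such hash map-based SPH structures accelerate can be sketched with a minimal uniform spatial hash; the cell size and particle data are illustrative:

```python
import numpy as np
from collections import defaultdict

def build_hash_grid(positions, h):
    """Map each particle index to the integer cell of side h containing it."""
    grid = defaultdict(list)
    for i, p in enumerate(positions):
        grid[tuple((p // h).astype(int))].append(i)
    return grid

def neighbours(grid, positions, i, h):
    """Fixed-radius query: scan the 27 cells around particle i and keep
    particles closer than the support radius h."""
    c = (positions[i] // h).astype(int)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in grid.get((c[0] + dx, c[1] + dy, c[2] + dz), []):
                    if j != i and np.linalg.norm(positions[i] - positions[j]) < h:
                        found.append(j)
    return found

pts = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0], [1.0, 1.0, 1.0]])
grid = build_hash_grid(pts, h=0.1)
print(neighbours(grid, pts, 0, h=0.1))   # [1]: only the nearby particle
```

The multiple independent structures described in the abstract layer further orderings on top of this basic cell-hashing idea to cut non-neighbourhood accesses.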
Neighbourhood queries for adaptive simulations are improved by using multiple independent data structures that share the same underlying self‐similar particle ordering, significantly reducing non‐neighbourhood particle accesses. Direct ray tracing is performed using an auxiliary data structure, with constant memory consumption, which allows for efficient traversal of the hash map‐based data structure as well as efficient intersection tests. Overall, our proposed method significantly improves the performance of spatially adaptive fluid simulations and allows for direct ray tracing of the fluid surface with little memory overhead.

Item Interactive Subsurface Scattering for Materials With High Scattering Distances (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Maisch, Sebastian; Ropinski, Timo; Benes, Bedrich and Hauser, Helwig
Existing algorithms for rendering subsurface scattering in real time cannot deal well with scattering over longer distances. Kernels for image space algorithms become very large in these circumstances and separation no longer works, while geometry‐based algorithms cannot preserve details very well. We present a novel approach that addresses all these downsides. While for lower scattering distances the advantages of geometry‐based methods are small, this is no longer the case for high scattering distances, as we will show. Our proposed method takes advantage of the highly detailed results of image space algorithms and combines them with a geometry‐based method to add the essential scattering from sources not included in image space. Our algorithm does not require pre‐computation based on the scene's geometry, so it can be applied directly to static and animated objects. Our method provides results that come close to ray‐traced images, which we show in direct comparisons with images generated by PBRT.
We compare our results to state‐of‐the‐art techniques applicable in these scenarios and show that we provide superior image quality while maintaining interactive rendering times.

Item Preserving Shadow Silhouettes in Illumination‐Driven Mesh Reduction (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Bethe, F.; Jendersie, J.; Grosch, T.; Benes, Bedrich and Hauser, Helwig
A main challenge for today's renderers is the ever‐growing size of 3D scenes, exceeding the capacity of typically available main memory. This especially holds true for graphics processing units (GPUs), which could otherwise be used to greatly reduce rendering time. A lot of the memory is spent on detailed geometry with mostly imperceptible influence on the final image, even in a global illumination context. Illumination‐driven mesh reduction, built on a Monte Carlo–based global illumination simulation, steers its mesh reduction towards areas with low visible contribution. While this works well for preserving high‐energy light paths such as caustics, it has two problems: first, objects that cast shadows while not being visible themselves are not preserved, resulting in highly inaccurate shadows; second, non‐transparent objects lack proper reduction guidance since there is no importance gradient on their backside, resulting in visible over‐simplification. We present a solution to these problems by extending illumination‐driven mesh reduction with occluder information, focusing on their silhouettes, as well as combining it with commonly used error quadrics to preserve geometric features.
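The "commonly used error quadrics" referred to above are the standard quadric error metric from mesh simplification; a minimal sketch with illustrative planes (the paper's combination with occluder information is not shown):

```python
import numpy as np

def plane_quadric(n, d):
    """Fundamental error quadric K = p p^T for the plane n·x + d = 0
    with unit normal n, so that v~^T K v~ is squared distance to it."""
    p = np.append(n, d)
    return np.outer(p, p)

def quadric_error(Q, v):
    """Summed squared distance to all accumulated planes:
    v~^T Q v~ with homogeneous v~ = (x, y, z, 1)."""
    vh = np.append(v, 1.0)
    return float(vh @ Q @ vh)

# Accumulate the quadrics of two planes meeting at the z axis: x = 0 and y = 0.
Q = plane_quadric(np.array([1.0, 0.0, 0.0]), 0.0) \
  + plane_quadric(np.array([0.0, 1.0, 0.0]), 0.0)
print(quadric_error(Q, np.array([0.0, 0.0, 3.0])))   # 0.0: lies on both planes
print(quadric_error(Q, np.array([1.0, 2.0, 0.0])))   # 5.0 = 1^2 + 2^2
```

A simplifier accumulates one such quadric per face at each vertex and collapses the edge whose optimal placement minimizes this error.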
Additionally, we demonstrate that the combined algorithm still supports iterative refinement of initially reduced geometry, resulting in an image visually similar to an unreduced rendering and enabling out‐of‐core operation.

Item Accelerating Liquid Simulation With an Improved Data‐Driven Method (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Gao, Yang; Zhang, Quancheng; Li, Shuai; Hao, Aimin; Qin, Hong; Benes, Bedrich and Hauser, Helwig
In physics‐based liquid simulation for graphics applications, pressure projection consumes a significant amount of computational time and is frequently the computational bottleneck. How to rapidly apply the pressure projection while accurately capturing the liquid geometry is a central question in current liquid simulation research. In this paper, we incorporate an artificial neural network into the simulation pipeline to handle the tricky projection step for liquid animation. Compared with previous neural‐network‐based work on gas flows, this paper advocates new advances in the composition of representative features as well as the loss functions, in order to facilitate fluid simulation with a free‐surface boundary. Specifically, we choose both the velocity and the level‐set function as the additional representation of the fluid state, which allows not only the motion but also the boundary position to be considered in the neural network solver. Meanwhile, we use the divergence error in the loss function to further emulate the lifelike behaviour of liquids. With these arrangements, our method greatly accelerates the pressure projection step in liquid simulation, while maintaining fairly convincing visual results.
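A divergence term of the kind the loss function above includes can be sketched with central differences on a collocated 2D grid; the paper's exact discretization and weighting may differ:

```python
import numpy as np

def divergence(u, v, dx=1.0):
    """Central-difference divergence of a collocated 2D velocity field
    (u, v), each of shape (N, N); interior cells only."""
    du_dx = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dx)
    dv_dy = (v[2:, 1:-1] - v[:-2, 1:-1]) / (2 * dx)
    return du_dx + dv_dy

def divergence_loss(u, v):
    """Mean squared divergence: zero for an incompressible field,
    so minimizing it pushes the predicted velocity towards
    divergence-free (liquid-like) behaviour."""
    return float(np.mean(divergence(u, v) ** 2))

n = 8
ys, xs = np.meshgrid(np.arange(n, dtype=float), np.arange(n, dtype=float),
                     indexing='ij')
# A divergence-free shear field: u = y, v = x.
print(divergence_loss(ys, xs))      # 0.0
# A purely expanding field u = x, v = y has divergence 2 everywhere.
print(divergence_loss(xs, ys))      # 4.0
```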
Additionally, our neural network performs well when applied to new scenes, even with varied boundaries or scales.

Item Stereo Inverse Brightness Modulation for Guidance in Dynamic Panorama Videos in Virtual Reality (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Grogorick, Steve; Tauscher, Jan‐Philipp; Heesen, Nikkel; Castillo, Susana; Magnor, Marcus; Benes, Bedrich and Hauser, Helwig
The rise of virtual reality offers exciting new possibilities for the creation of media content but also poses new challenges. Some areas of interest might be overlooked because the visual content fills up a large portion of viewers' visual field. Moreover, this content is available in 360° around the viewer, yielding locations completely out of sight and making, for example, recall or storytelling in cinematic Virtual Reality (VR) quite difficult. In this paper, we present an evaluation of Stereo Inverse Brightness Modulation for effective and subtle guidance of participants' attention while navigating dynamic virtual environments. The technique exploits the binocular rivalry effect in human stereo vision and was previously shown to be effective in static environments. Moreover, we propose an extension of the method for successful guidance towards target locations outside the initial visual field. We conduct three perceptual studies, using 13 distinct panorama videos and two VR systems (a VR head‐mounted display and a fully immersive dome projection system), to investigate (1) general applicability to dynamic environments, (2) the influence of stimulus parameters and VR system, and (3) the effectiveness of the proposed extension for out‐of‐sight targets.
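The inverse brightness modulation idea can be sketched as opposite per-eye brightness shifts inside a target region, which creates the inter-ocular mismatch that triggers binocular rivalry; `delta` and the mask are illustrative, not the paper's calibrated parameters:

```python
import numpy as np

def stereo_inverse_brightness(left, right, mask, delta=0.2):
    """Inversely modulate brightness of a target region in the two eyes'
    images: +delta in the left eye, -delta in the right. The per-eye
    mismatch draws gaze while the mean brightness across both eyes,
    and hence the cyclopean percept, stays roughly unchanged."""
    left_mod = np.clip(left + delta * mask, 0.0, 1.0)
    right_mod = np.clip(right - delta * mask, 0.0, 1.0)
    return left_mod, right_mod

img = np.full((4, 4), 0.5)          # a flat mid-grey test image
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                # the guidance target region
L, R = stereo_inverse_brightness(img, img, mask)
print(L[1, 1], R[1, 1])             # opposite per-eye shifts: 0.7 and 0.3
print((L[1, 1] + R[1, 1]) / 2)      # mean brightness unchanged: 0.5
```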
Our results prove the applicability of the method to dynamic environments while maintaining its unobtrusive appearance.

Item Spherical Gaussian‐based Lightcuts for Glossy Interreflections (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Huo, Y.C.; Jin, S.H.; Liu, T.; Hua, W.; Wang, R.; Bao, H.J.; Benes, Bedrich and Hauser, Helwig
It is still challenging to render directional but non‐specular reflections in complex scenes. The Spherical Gaussian (SG)‐based many‐light framework provides a scalable solution but still requires a large number of glossy virtual lights to avoid spikes as well as to reduce clamping errors. Directly gathering contributions from these glossy virtual lights to each pixel in a pairwise way is very inefficient. In this paper, we propose an adaptive algorithm with tighter error bounds to efficiently compute glossy interreflections from glossy virtual lights. The approach extends Lightcuts by building hierarchies on both lights and pixels, with new error bounds and new GPU‐based traversal methods between the light and pixel hierarchies. Results demonstrate that our method faithfully and efficiently computes glossy interreflections in scenes with highly glossy and spatially varying reflectance. Compared with the conventional Lightcuts method, our approach generates lightcuts with only one‐fourth to one‐fifth as many light nodes and therefore exhibits better scalability. Additionally, when implemented on the GPU, our algorithm achieves an order of magnitude faster performance than the previous method.

Item Data‐Driven Facial Simulation (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020) Romeo, M.; Schvartzman, S. C.; Benes, Bedrich and Hauser, Helwig
In Visual Effects, the creation of realistic facial performances is still a challenge that the industry is trying to overcome.
Blendshape deformation is used to reproduce the action of different groups of muscles, which produces realistic static results. However, this is not sufficient to generate believable and detailed facial performances of animated digital characters. To increase the realism of facial performances, it is possible to enhance standard facial rigs using physical simulation approaches. However, setting up a simulation rig and controlling material properties according to the performance is not an easy task and can take a lot of time and iterations to get right. We present a workflow that allows us to generate an activation map for the fibres of a set of superficial patches we call pseudo‐muscles. The pseudo‐muscles are automatically identified using k‐means to cluster the data from the blendshape targets in the animation rig and to compute the direction of their contraction (the direction of the pseudo‐muscle fibres). We use an Extended Position‐Based Dynamics solver to add physical simulation to the facial animation, controlling the behaviour of the simulation through the activation map. We show the results achieved using the proposed solution on two digital humans and one fantasy cartoon character, demonstrating that the identified pseudo‐muscles approximate facial anatomy and that the simulation properties are properly controlled, increasing realism while preserving the work of animators.
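The standard blendshape deformation model this workflow builds on can be sketched as follows; the targets and weights are illustrative, not production rig data:

```python
import numpy as np

def blendshape(neutral, deltas, weights):
    """Standard blendshape deformation: the deformed face is the
    neutral mesh plus a weighted sum of per-target vertex offsets.
    neutral: (V, 3); deltas: (B, V, 3); weights: (B,)."""
    return neutral + np.einsum('b,bvi->vi', weights, deltas)

neutral = np.zeros((2, 3))                  # a tiny two-vertex 'face'
deltas = np.array([
    [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]],    # target 0 moves vertex 0 along x
    [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]],    # target 1 moves vertex 1 along y
])
w = np.array([0.5, 1.0])
print(blendshape(neutral, deltas, w))       # vertex 0 shifts 0.5 in x,
                                            # vertex 1 shifts 1.0 in y
```

Clustering the `deltas` of a rig's targets (e.g. with k-means, as the abstract describes) groups vertices that move coherently, which is what yields the pseudo-muscle patches and their fibre directions.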