Search Results

Now showing 1 - 10 of 88
  • Item
    Elasticity-based Clustering for Haptic Interaction with Heterogeneous Deformable Objects
    (The Eurographics Association, 2017) Gouis, Benoît Le; Marchal, Maud; Lécuyer, Anatole; Arnaldi, Bruno; Fabrice Jaillet and Florence Zara
    Physically-based simulation of heterogeneous objects remains computationally demanding for many applications, especially when involving haptic interaction with virtual environments. In this paper, we introduce a novel multiresolution approach for haptic interaction with heterogeneous deformable objects. Our method, called "Elasticity-based Clustering", is based on the clustering and aggregation of elasticity inside an object in order to create large homogeneous volumes that preserve important features of the initial distribution. The design of such large, homogeneous volumes improves the attribution of elasticity to the elements of the coarser geometry. We implemented and tested our approach within a complete, real-time haptic interaction pipeline compatible with consumer-grade haptic devices. We evaluated the performance of our approach on a large set of elasticity configurations using a perception-based quality criterion. Our results show that for 90% of the studied cases our method achieves a six-fold speedup in simulation time with no theoretical perceptual difference.
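    A minimal sketch of the general idea only, not the paper's algorithm: per-element stiffness values are grouped into a few homogeneous clusters so that a coarser mesh can be assigned one aggregated elasticity value per cluster. Plain 1D k-means over Young's moduli is used here purely for illustration; all names (cluster_elasticity, youngs, centers) are hypothetical.

      // Hedged sketch, see note above: plain 1D k-means, not the paper's method.
      #include <cmath>
      #include <cstddef>
      #include <cstdio>
      #include <vector>

      // Assign each element's Young's modulus to the nearest cluster centre,
      // then move each centre to the mean of its cluster, and repeat.
      std::vector<int> cluster_elasticity(const std::vector<float>& youngs,
                                          std::vector<float>& centers,
                                          int iterations = 10) {
          std::vector<int> label(youngs.size(), 0);
          for (int it = 0; it < iterations; ++it) {
              for (std::size_t i = 0; i < youngs.size(); ++i) {
                  float best = std::fabs(youngs[i] - centers[0]);
                  label[i] = 0;
                  for (std::size_t c = 1; c < centers.size(); ++c) {
                      float d = std::fabs(youngs[i] - centers[c]);
                      if (d < best) { best = d; label[i] = static_cast<int>(c); }
                  }
              }
              std::vector<float> sum(centers.size(), 0.f);
              std::vector<int>   cnt(centers.size(), 0);
              for (std::size_t i = 0; i < youngs.size(); ++i) {
                  sum[label[i]] += youngs[i];
                  ++cnt[label[i]];
              }
              for (std::size_t c = 0; c < centers.size(); ++c)
                  if (cnt[c] > 0) centers[c] = sum[c] / cnt[c];
          }
          return label;
      }

      int main() {
          std::vector<float> youngs{ 1e4f, 1.2e4f, 9e3f, 5e5f, 4.8e5f, 5.2e5f }; // Pa, per element
          std::vector<float> centers{ 1e4f, 5e5f };   // two clusters: soft and stiff
          std::vector<int> label = cluster_elasticity(youngs, centers);
          for (std::size_t i = 0; i < label.size(); ++i)
              std::printf("element %zu -> cluster %d (E = %.0f Pa)\n", i, label[i], centers[label[i]]);
      }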
  • Item
    Predictive Modeling of Material Appearance: From the Drawing Board to Interdisciplinary Applications
    (The Eurographics Association, 2024) Baranoski, Gladimir V. G.; Mania, Katerina; Artusi, Alessandro
    This tutorial addresses one of the fundamental and timely topics of computer graphics research, namely the predictive modeling of material appearance. Although this topic is deeply rooted in traditional areas like rendering and natural phenomena simulation, this tutorial is not limited to content connected to these areas. It also looks closely into the scientific methodology employed in the development of predictive models of light and matter interactions. Given the widespread use of this methodology to find modeling solutions for problems within and outside computer graphics, its discussion from a ''behind the scenes'' perspective aims to underscore practical and far-reaching aspects of interdisciplinary research that are often overlooked in related publications. More specifically, this tutorial unveils constraints and pitfalls found in each of the key stages of the model development process, namely data collection, design, and evaluation, and brings forward alternatives to tackle them effectively. Furthermore, besides being a central component of realistic image synthesis frameworks, predictive material appearance models have a scope of applications that extends far beyond the generation of believable images. For instance, they can be employed to accelerate the hypothesis generation and validation cycles of research across a wide range of fields, from biology and medicine to photonics and remote sensing, among others. These models can also be used to generate comprehensive in silico (computational) datasets to support the translation of knowledge advances in those fields to real-world applications (e.g., the noninvasive screening of medical conditions and the remote detection of environmental hazards). In fact, a number of them are already being used in the physical and life sciences, notably to support investigations seeking to strengthen the current understanding of material appearance changes prompted by mechanisms that cannot be fully studied using standard ''wet'' experimental procedures. Accordingly, such interdisciplinary research initiatives are also discussed in this tutorial through selected case studies involving the use of predictive material appearance models to elucidate challenging scientific questions.
  • Item
    C++ Compile Time Polymorphism for Ray Tracing
    (The Eurographics Association, 2017) Zellmann, Stefan; Lang, Ulrich; Matthias Hullin and Reinhard Klein and Thomas Schultz and Angela Yao
    Reducing the number of conditional branching instructions in innermost loops is crucial for high-performance code on contemporary hardware architectures. In the context of ray tracing algorithms, typical examples of branching in inner loops are the decision of which type of primitive a ray should be tested against for intersection, or which BRDF implementation should be evaluated at a point of intersection. Runtime polymorphism, which is often used in those cases, can lead to highly expressive but poorly performing code. Optimization strategies often involve reduced feature sets (e.g., supporting only a single geometric primitive type) or an upstream sorting step followed by multiple ray tracing kernel executions, which effectively places the branching instruction outside the inner loop. In this paper we propose C++ compile time polymorphism as an alternative optimization strategy that does not on its own reduce branching, but that can be used to write highly expressive code without sacrificing optimization potential such as early binding or the inlining of tiny functions. We present an implementation in modern C++ that we integrate into a ray tracing template library, and we evaluate our approach on CPU and GPU architectures.
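    A minimal C++ sketch of the technique the abstract describes: replacing a virtual intersect() call with overloads and a templated kernel, so the primitive type is bound at compile time and the intersection test can be inlined inside the traversal loop. The types and names (Sphere, Triangle, Ray, trace_kernel) are illustrative and are not taken from the authors' library.

      // Hedged sketch, see note above: compile-time dispatch of the primitive type.
      #include <cmath>
      #include <cstdio>
      #include <vector>

      struct Vec3 { float x, y, z; };
      struct Ray  { Vec3 orig, dir; };

      struct Sphere   { Vec3 center; float radius; };
      struct Triangle { Vec3 v0, v1, v2; };

      // One overload per primitive type; resolved at compile time (early binding).
      inline float intersect(Ray const& r, Sphere const& s) {
          // Returns the ray parameter t of the nearest hit, or -1 on miss (dir assumed normalized).
          Vec3 oc{ r.orig.x - s.center.x, r.orig.y - s.center.y, r.orig.z - s.center.z };
          float b = oc.x * r.dir.x + oc.y * r.dir.y + oc.z * r.dir.z;
          float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - s.radius * s.radius;
          float disc = b * b - c;
          return disc < 0.f ? -1.f : -b - std::sqrt(disc);
      }

      inline float intersect(Ray const& /*r*/, Triangle const& /*t*/) {
          // Placeholder: a real implementation would run e.g. Moeller-Trumbore here.
          return -1.f;
      }

      // The kernel is a template: each instantiation is specialized for one
      // primitive type, so there is no per-primitive virtual call in the loop.
      template <typename Primitive>
      float trace_kernel(Ray const& ray, std::vector<Primitive> const& prims) {
          float nearest = -1.f;
          for (auto const& p : prims) {
              float t = intersect(ray, p);           // statically bound, inlinable
              if (t > 0.f && (nearest < 0.f || t < nearest)) nearest = t;
          }
          return nearest;
      }

      int main() {
          std::vector<Sphere> spheres{ { {0.f, 0.f, 5.f}, 1.f } };
          Ray ray{ {0.f, 0.f, 0.f}, {0.f, 0.f, 1.f} };
          std::printf("nearest hit t = %f\n", trace_kernel(ray, spheres));
      }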
  • Item
    Iso Photographic Rendering
    (The Eurographics Association, 2018) Porral, Philippe; Lucas, Laurent; Muller, Thomas; Randrianandrasana, Joël; Reinhard Klein and Holly Rushmeier
    In the field of computer graphics, the simulation of the visual appearance of materials requires an accurate computation of the light transport equation. Consequently, material models need to take into account the various factors that may influence the spectral radiance perceived by the human eye. Although numerous relevant studies on the reflectance properties of materials have been conducted to date, the environment maps used to simulate visual behaviors remain chiefly trichromatic. While questions regarding the accurate characterization of natural lighting have been raised for some time, there are still no real sky environment maps that include both spectral radiance and polarization data. Under these conditions, the simulations carried out are approximate and therefore insufficient for the industrial world, where investment-sensitive decisions are often made based on these very calculations.
  • Item
    Sketching for Real-time Control of Crowd Simulations
    (The Eurographics Association, 2017) Gonzalez, Luis Rene Montana; Maddock, Steve; Tao Ruan Wan and Franck Vidal
    Crowd simulations are used in various fields such as entertainment, training systems, and city planning. However, controlling the behaviour of the pedestrians typically involves tuning the system parameters through trial and error, a time-consuming process that relies on knowledge of a potentially complex parameter set. This paper presents an interactive graphical approach to controlling the simulation by sketching in the simulation environment. The user is able to sketch obstacles to block pedestrians and lines to force pedestrians to follow a specific path, as well as define spawn and exit locations for pedestrians. The obstacles and lines modify the underlying navigation representation, and pedestrian trajectories are recalculated in real time. The FLAMEGPU framework is used for the simulation and the Unreal game engine is used for visualisation. We demonstrate the effectiveness of the approach using a range of scenarios, achieving interactive editing and rendering frame rates for tens of thousands of pedestrians. A comparison with the commercial software MassMotion is also given.
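    A minimal sketch, not the paper's FLAMEGPU implementation, of the underlying idea: a sketched obstacle marks cells of a navigation grid as blocked, and the distance field used to steer pedestrians toward an exit is then recomputed, here with a simple BFS. Grid size and all names are illustrative.

      // Hedged sketch, see note above: recompute a steering field after an obstacle is painted.
      #include <cstdio>
      #include <queue>
      #include <vector>

      constexpr int W = 8, H = 8;

      // Recompute a distance-to-exit field, ignoring blocked cells.
      std::vector<int> recompute_field(const std::vector<bool>& blocked, int exitX, int exitY) {
          std::vector<int> dist(W * H, -1);
          std::queue<int> q;
          dist[exitY * W + exitX] = 0;
          q.push(exitY * W + exitX);
          const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
          while (!q.empty()) {
              int cur = q.front(); q.pop();
              int x = cur % W, y = cur / W;
              for (int d = 0; d < 4; ++d) {
                  int nx = x + dx[d], ny = y + dy[d];
                  if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                  int idx = ny * W + nx;
                  if (blocked[idx] || dist[idx] >= 0) continue;
                  dist[idx] = dist[cur] + 1;
                  q.push(idx);
              }
          }
          return dist;   // pedestrians descend this field toward the exit
      }

      int main() {
          std::vector<bool> blocked(W * H, false);
          for (int y = 2; y < 6; ++y) blocked[y * W + 4] = true;   // a sketched wall at x = 4
          std::vector<int> dist = recompute_field(blocked, 7, 7);
          std::printf("distance to exit from (0,0): %d\n", dist[0]);
      }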
  • Item
    Downsampling and Storage of Pre-Computed Gradients for Volume Rendering
    (The Eurographics Association, 2017) Díaz-García, Jesús; Brunet, Pere; Navazo, Isabel; Vázquez, Pere-Pau; Fco. Javier Melero and Nuria Pelechano
    The way in which gradients are computed in volume datasets influences both the quality of the shading and the performance of rendering algorithms. In particular, the visualization of coarse datasets in multi-resolution representations suffers when gradients are evaluated on the fly in the shader code by accessing neighbouring positions. Not only is this a costly computation that compromises the performance of the visualization process, it also produces low-quality gradients that do not resemble the originals as much as desired, because of the new topology of the downsampled datasets. An obvious solution is to pre-compute the gradients and store them. Unfortunately, this introduces two problems: first, the downsampling process itself is also prone to generating artifacts; second, the limited bit size of the storage causes the gradients to lose precision. To solve these issues, we propose a downsampling filter for pre-computed gradients that provides improved gradients that better match the originals, so that the aforementioned artifacts disappear. To address the storage problem, we present a method for the efficient storage of gradient directions that minimizes the angular error among all representable vectors within 3 bytes of storage. We also provide several examples that show the advantages of the proposed approaches.
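    The following sketch is not the paper's encoding; it only illustrates the general problem of storing a unit gradient direction in a few bytes, using the well-known octahedral mapping quantized to 8 bits per component (2 bytes) as a stand-in for the paper's 3-byte scheme. All names are illustrative.

      // Hedged sketch, see note above: octahedral encode/decode of a unit gradient.
      #include <cmath>
      #include <cstdint>
      #include <cstdio>

      struct Vec3 { float x, y, z; };

      static float sign_not_zero(float v) { return v >= 0.f ? 1.f : -1.f; }

      // Map a unit vector onto the octahedron and quantize to two bytes.
      void encode_octahedral(Vec3 n, std::uint8_t& qx, std::uint8_t& qy) {
          float invL1 = 1.f / (std::fabs(n.x) + std::fabs(n.y) + std::fabs(n.z));
          float px = n.x * invL1, py = n.y * invL1;
          if (n.z < 0.f) {                       // fold the lower hemisphere over
              float ox = (1.f - std::fabs(py)) * sign_not_zero(px);
              float oy = (1.f - std::fabs(px)) * sign_not_zero(py);
              px = ox; py = oy;
          }
          qx = static_cast<std::uint8_t>(std::lround((px * 0.5f + 0.5f) * 255.f));
          qy = static_cast<std::uint8_t>(std::lround((py * 0.5f + 0.5f) * 255.f));
      }

      // Reconstruct an approximate unit vector from the two stored bytes.
      Vec3 decode_octahedral(std::uint8_t qx, std::uint8_t qy) {
          float px = qx / 255.f * 2.f - 1.f;
          float py = qy / 255.f * 2.f - 1.f;
          float pz = 1.f - std::fabs(px) - std::fabs(py);
          if (pz < 0.f) {                        // unfold the lower hemisphere
              float ox = (1.f - std::fabs(py)) * sign_not_zero(px);
              float oy = (1.f - std::fabs(px)) * sign_not_zero(py);
              px = ox; py = oy;
          }
          float len = std::sqrt(px * px + py * py + pz * pz);
          return { px / len, py / len, pz / len };
      }

      int main() {
          Vec3 g{ 0.267f, 0.535f, 0.802f };      // an already-normalized gradient
          std::uint8_t qx, qy;
          encode_octahedral(g, qx, qy);
          Vec3 r = decode_octahedral(qx, qy);
          std::printf("decoded: %f %f %f\n", r.x, r.y, r.z);
      }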
  • Item
    k-d Tree Construction Designed for Motion Blur
    (The Eurographics Association, 2017) Yang, Xin; Liu, Qi; Yin, Baocai; Zhang, Qiang; Zhou, Dongsheng; Wei, Xiaopeng; Matthias Zwicker and Pedro Sander
    We present a k-d tree construction algorithm designed to accelerate the rendering of scenes with motion blur, in application scenarios where a k-d tree is either required or desired. Our associated data structure focuses on capturing incoherent motion within the nodes of a k-d tree and improves both data structure quality and efficiency over previous methods. At build time, we track primitives whose motion is significantly distinct from that of other primitives within a node, and we guarantee valid node references and the correctness of the data structure via a primitive duplication heuristic and propagation rules. Our experiments with this hierarchy show artifact-free motion-blur rendering using a k-d tree, and demonstrate improvements over a traditional BVH with interpolation and an MSBVH structure designed to handle moving primitives, particularly in render time.
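    A minimal sketch, not the paper's construction heuristic, of how moving primitives are commonly bounded for motion blur: bounds are stored at shutter open and close, the box at a ray's time is obtained by linear interpolation, and a conservative build-time bound is the union over the whole shutter interval. All names are illustrative.

      // Hedged sketch, see note above: time-interpolated bounds for a moving primitive.
      #include <algorithm>
      #include <cstdio>

      struct AABB {
          float lo[3], hi[3];
          void expand(const AABB& b) {
              for (int i = 0; i < 3; ++i) {
                  lo[i] = std::min(lo[i], b.lo[i]);
                  hi[i] = std::max(hi[i], b.hi[i]);
              }
          }
      };

      struct MovingPrimBounds {
          AABB atOpen, atClose;                       // bounds at shutter open / close

          // Bounds of the primitive at a specific ray time in [0, 1].
          AABB at_time(float t) const {
              AABB b;
              for (int i = 0; i < 3; ++i) {
                  b.lo[i] = (1.f - t) * atOpen.lo[i] + t * atClose.lo[i];
                  b.hi[i] = (1.f - t) * atOpen.hi[i] + t * atClose.hi[i];
              }
              return b;
          }

          // Conservative bound used while building the tree: union over all times.
          AABB over_shutter() const { AABB b = atOpen; b.expand(atClose); return b; }
      };

      int main() {
          MovingPrimBounds p{ { {0, 0, 0}, {1, 1, 1} }, { {4, 0, 0}, {5, 1, 1} } };
          AABB mid = p.at_time(0.5f);
          std::printf("x-range at t=0.5: [%.1f, %.1f]\n", mid.lo[0], mid.hi[0]);
      }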
  • Item
    A Unified Manifold Framework for Efficient BRDF Sampling based on Parametric Mixture Models
    (The Eurographics Association, 2018) Herholz, Sebastian; Elek, Oskar; Schindel, Jens; Křivánek, Jaroslav; Lensch, Hendrik P. A.; Jakob, Wenzel and Hachisuka, Toshiya
    Virtually all existing analytic BRDF models are built from multiple functional components (e.g., Fresnel term, normal distribution function, etc.). This makes accurate importance sampling of the full model challenging, and so current solutions only cover a subset of the model's components. This leads to sub-optimal or even invalid proposed directional samples, which can negatively impact the efficiency of light transport solvers based on Monte Carlo integration. To overcome this problem, we propose a unified BRDF sampling strategy based on parametric mixture models (PMMs). We show that for a given BRDF, the parameters of the associated PMM can be defined in smooth manifold spaces, which can be compactly represented using multivariate B-Splines. These manifolds are defined in the parameter space of the BRDF and allow for arbitrary, continuous queries of the PMM representation for varying BRDF parameters, which further enables importance sampling for spatially varying BRDFs. Our representation is not limited to analytic BRDF models, but can also be used for sampling measured BRDF data. The resulting manifold framework enables accurate and efficient BRDF importance sampling with very small approximation errors.
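    A minimal sketch of the two operations an importance-sampling strategy needs from a parametric mixture model, drawing a sample and evaluating its PDF, shown here for a plain 2D isotropic Gaussian mixture. The mixture parameters and names are illustrative; the paper's B-spline manifold interpolation of the PMM parameters over the BRDF parameter space is not reproduced.

      // Hedged sketch, see note above: sample from and evaluate a 2D Gaussian mixture.
      #include <cmath>
      #include <cstdio>
      #include <random>
      #include <vector>

      constexpr float kPi = 3.14159265f;

      struct Gaussian2D { float mean[2]; float sigma; float weight; };

      struct Mixture {
          std::vector<Gaussian2D> comps;   // component weights are assumed to sum to 1

          // Draw one 2D sample: choose a component by its weight, then sample it.
          void sample(std::mt19937& rng, float out[2]) const {
              std::uniform_real_distribution<float> u(0.f, 1.f);
              float xi = u(rng), acc = 0.f;
              const Gaussian2D* chosen = &comps.back();
              for (auto const& c : comps) {
                  acc += c.weight;
                  if (xi <= acc) { chosen = &c; break; }
              }
              std::normal_distribution<float> n(0.f, chosen->sigma);
              out[0] = chosen->mean[0] + n(rng);
              out[1] = chosen->mean[1] + n(rng);
          }

          // PDF of the full mixture at a 2D point (needed by the Monte Carlo estimator).
          float pdf(const float p[2]) const {
              float result = 0.f;
              for (auto const& c : comps) {
                  float dx = p[0] - c.mean[0], dy = p[1] - c.mean[1];
                  float s2 = c.sigma * c.sigma;
                  result += c.weight * std::exp(-(dx * dx + dy * dy) / (2.f * s2))
                            / (2.f * kPi * s2);
              }
              return result;
          }
      };

      int main() {
          Mixture m{ { { {0.2f, 0.3f}, 0.05f, 0.7f },
                       { {0.7f, 0.6f}, 0.15f, 0.3f } } };
          std::mt19937 rng(42);
          float s[2];
          m.sample(rng, s);
          std::printf("sample (%.3f, %.3f), pdf %.3f\n", s[0], s[1], m.pdf(s));
      }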
  • Item
    A Genetic Algorithm Based Heterogeneous Subsurface Scattering Representation
    (The Eurographics Association, 2020) Kurt, Murat; Klein, Reinhard and Rushmeier, Holly
    In this paper, we present a novel heterogeneous subsurface scattering (SSS) representation, which is based on a combination of Singular Value Decomposition (SVD) and genetic optimization techniques. To find the best transformation to apply to measured subsurface scattering data, we use a genetic optimization framework that tries various transformations of the measured heterogeneous subsurface scattering data and selects the fittest one. After applying the best transformation, we compactly represent the measured subsurface scattering data by applying the SVD separately to each color channel of the transformed profiles. To obtain a compact and accurate representation, we apply the SVD iteratively to the model errors. We validate our approach on a range of optically thick, real-world translucent materials. We show that our genetic-algorithm-based heterogeneous subsurface scattering representation achieves greater visual accuracy than alternative techniques at the same level of compression.
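    A minimal sketch of the core compression step only: a truncated SVD of a matrix whose rows are (transformed) scattering profiles for one color channel, written with the Eigen library as an assumed dependency. The genetic search for the best transformation and the iterative SVD on the residual errors are not reproduced; the toy data and names are illustrative.

      // Hedged sketch, see note above: rank-k SVD compression of scattering profiles.
      #include <Eigen/Dense>
      #include <cstdio>

      int main() {
          // Toy data: 6 sample locations x 4 profile entries (one color channel).
          Eigen::MatrixXf profiles(6, 4);
          profiles << 1.0f,  0.8f,  0.5f,  0.2f,
                      0.9f,  0.7f,  0.45f, 0.18f,
                      1.1f,  0.85f, 0.55f, 0.22f,
                      0.2f,  0.15f, 0.1f,  0.05f,
                      0.25f, 0.2f,  0.12f, 0.06f,
                      0.22f, 0.18f, 0.11f, 0.05f;

          const int rank = 2;   // number of singular vectors kept (compression level)

          Eigen::JacobiSVD<Eigen::MatrixXf> svd(profiles,
                                                Eigen::ComputeThinU | Eigen::ComputeThinV);
          Eigen::MatrixXf approx = svd.matrixU().leftCols(rank)
                                 * svd.singularValues().head(rank).asDiagonal()
                                 * svd.matrixV().leftCols(rank).transpose();

          // The paper iterates on the residual; a further SVD could be applied to it.
          float err = (profiles - approx).norm() / profiles.norm();
          std::printf("relative reconstruction error at rank %d: %f\n", rank, err);
      }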
  • Item
    Variable k-buffer using Importance Maps
    (The Eurographics Association, 2017) Vasilakis, Andreas-Alexandros; Vardis, Konstantinos; Papaioannou, Georgios; Moustakas, Konstantinos; Adrien Peytavie and Carles Bosch
    Successfully predicting visual attention can significantly improve many aspects of computer graphics and games. Despite thorough investigation in this area, selective rendering has so far not addressed fragment visibility determination. To this end, we present the first ''selective multi-fragment rendering'' solution, which alters the classic k-buffer construction procedure from a fixed-k to a variable-k per-pixel fragment allocation guided by an importance-driven model. Given a fixed memory budget, the idea is to allocate more fragment layers in the parts of the image that need them most or that contribute more significantly to the visual result. An importance map, dynamically estimated per frame based on several criteria, is used to distribute the fragment layers across the image. We illustrate the effectiveness and quality superiority of our approach in comparison to previous methods when performing order-independent transparency rendering in various high-depth-complexity scenarios.
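    A minimal sketch, not the paper's GPU k-buffer construction or importance-map estimation, of the budget-allocation idea: a fixed total number of fragment slots is distributed across pixels in proportion to a per-pixel importance value, with at least one slot per pixel. All names and values are illustrative.

      // Hedged sketch, see note above: variable per-pixel layer counts from an importance map.
      #include <cstdio>
      #include <vector>

      // importance: one non-negative weight per pixel; budget: total fragment slots.
      std::vector<int> allocate_layers(const std::vector<float>& importance, int budget) {
          const int numPixels = static_cast<int>(importance.size());
          std::vector<int> k(numPixels, 1);              // at least one layer per pixel
          const int extra = budget - numPixels;          // slots left after the minimum
          if (extra <= 0) return k;

          float total = 0.f;
          for (float w : importance) total += w;
          if (total <= 0.f) return k;

          // Proportional shares, rounded down; leftover slots from rounding stay
          // unused in this sketch (a real allocator could give them to top-ranked pixels).
          for (int i = 0; i < numPixels; ++i)
              k[i] += static_cast<int>(extra * importance[i] / total);
          return k;
      }

      int main() {
          std::vector<float> importance{ 0.9f, 0.1f, 0.5f, 0.0f };   // per-pixel importance
          std::vector<int> layers = allocate_layers(importance, 16); // total budget of 16 slots
          for (int i = 0; i < static_cast<int>(layers.size()); ++i)
              std::printf("pixel %d: %d layers\n", i, layers[i]);
      }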