Search Results

Now showing 1 - 10 of 127
  • Item
    Extracting Microfacet-based BRDF Parameters from Arbitrary Materials with Power Iterations
    (The Eurographics Association and John Wiley & Sons Ltd., 2015) Dupuy, Jonathan; Heitz, Eric; Iehl, Jean-Claude; Poulin, Pierre; Ostromoukhov, Victor; Eds.: Jaakko Lehtinen and Derek Nowrouzezahrai
    We introduce a novel fitting procedure that takes as input an arbitrary material, possibly anisotropic, and automatically converts it to a microfacet BRDF. Our algorithm is based on the property that the distribution of microfacets may be retrieved by solving an eigenvector problem that is built solely from backscattering samples. We show that the eigenvector associated with the largest eigenvalue is always the only solution to this problem, and compute it using the power iteration method. This approach is straightforward to implement, much faster to compute, and considerably more robust than solutions based on nonlinear optimizations. In addition, we provide simple procedures to convert our fits into both Beckmann and GGX roughness parameters, and discuss the advantages of microfacet slope space for making our fits editable. We apply our method to measured materials from two large databases that include anisotropic materials, and demonstrate the benefits of spatially varying roughness on texture-mapped geometric models.
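    The power iteration named in this abstract is simple to state in code. Below is a minimal, generic sketch: it assumes an already-built symmetric matrix A (how the paper constructs this matrix from backscattering samples is not reproduced here) and repeatedly applies A until the dominant eigenvector stabilizes.
```python
import numpy as np

def power_iteration(A, num_iters=100, tol=1e-10):
    """Return the dominant eigenvector of a square matrix A."""
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        w = A @ v
        w_norm = np.linalg.norm(w)
        if w_norm == 0.0:
            return v          # A is (numerically) the zero map
        w /= w_norm
        if np.linalg.norm(w - v) < tol:
            return w          # converged
        v = w
    return v

# Toy usage: dominant eigenvector of a small symmetric test matrix.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
v = power_iteration(A)
print(v, (A @ v) / v)   # ratios approximate the largest eigenvalue
```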
  • Item
    Geometry and Attribute Compression for Voxel Scenes
    (The Eurographics Association and John Wiley & Sons Ltd., 2016) Dado, Bas; Kol, Timothy R.; Bauszat, Pablo; Thiery, Jean-Marc; Eisemann, Elmar; Eds.: Joaquim Jorge and Ming Lin
    Voxel-based approaches are today's standard for encoding volume data. Recently, directed acyclic graphs (DAGs) were successfully used for compressing sparse voxel scenes as well, but they are restricted to a single bit of (geometry) information per voxel. We present a method to compress arbitrary data, such as colors, normals, or reflectance information. By decoupling geometry and voxel data via a novel mapping scheme, we are able to apply the DAG principle to encode the topology, while using a palette-based compression for the voxel attributes, leading to a drastic memory reduction. Our method outperforms existing state-of-the-art techniques and is well suited for GPU architectures. We achieve real-time performance on commodity hardware for colored scenes with up to 17 hierarchical levels (a 128K³ voxel resolution), which are stored fully in core.
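    The palette idea behind the attribute compression can be illustrated with a toy sketch. The function below is a hypothetical, simplified stand-in: it deduplicates per-voxel attributes into a palette plus narrow indices, which is the basic source of the memory reduction; the paper's actual mapping scheme and DAG machinery are not modeled.
```python
import numpy as np

def palette_compress(attributes):
    """Split an attribute array into a palette of unique values plus
    per-voxel indices into that palette (simplified illustration only)."""
    palette, indices = np.unique(attributes, axis=0, return_inverse=True)
    # Width of each stored index in bits: ceil(log2(#palette entries)).
    bits = max(1, int(np.ceil(np.log2(len(palette)))))
    return palette, indices.astype(np.uint32), bits

# Toy usage: 8 voxels with RGB colors drawn from only 3 distinct values.
colors = np.array([[255, 0, 0], [255, 0, 0], [0, 255, 0], [0, 0, 255],
                   [255, 0, 0], [0, 255, 0], [0, 255, 0], [0, 0, 255]],
                  dtype=np.uint8)
palette, indices, bits = palette_compress(colors)
print(len(palette), "palette entries,", bits, "bits per voxel index")
```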
  • Item
    MCFTLE: Monte Carlo Rendering of Finite-Time Lyapunov Exponent Fields
    (The Eurographics Association and John Wiley & Sons Ltd., 2016) Günther, Tobias; Kuhn, Alexander; Theisel, Holger; Eds.: Kwan-Liu Ma, Giuseppe Santucci, and Jarke van Wijk
    Traditionally, Lagrangian fields such as finite-time Lyapunov exponents (FTLE) are precomputed on a discrete grid and ray-cast afterwards. This, however, introduces both grid discretization errors and sampling errors during ray marching. In this work, we apply a progressive, view-dependent Monte Carlo-based approach for the visualization of such Lagrangian fields in time-dependent flows. Our approach avoids grid discretization and ray marching errors completely, is consistent, and has low memory consumption. The system provides noisy previews that converge over time to an accurate high-quality visualization. Compared to traditional approaches, the proposed system avoids explicitly predefined fieldline seeding structures, and uses a Monte Carlo sampling strategy named Woodcock tracking to distribute samples along the view ray. An acceleration of this sampling strategy requires local upper bounds for the FTLE values, which we progressively acquire during the rendering. Our approach is tailored for high-quality visualizations of complex FTLE fields and is guaranteed to faithfully represent detailed ridge surface structures as indicators for Lagrangian coherent structures (LCS). We demonstrate the effectiveness of our approach using a set of analytic test cases and real-world numerical simulations.
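    Woodcock tracking, the sampling strategy named in the abstract, is a standard free-flight sampler; a minimal sketch follows. It assumes a scalar extinction-like function sigma(t) along the ray and a known upper bound sigma_max (the paper acquires such local bounds progressively); the toy sigma and all constants here are invented for illustration.
```python
import numpy as np

def woodcock_track(sigma, sigma_max, t_max, rng):
    """Sample a free-flight distance along a ray through a heterogeneous
    medium with extinction sigma(t), bounded above by sigma_max.
    Returns the sampled collision distance, or None if the ray escapes."""
    t = 0.0
    while True:
        # Step by an exponentially distributed distance w.r.t. the majorant.
        t -= np.log(1.0 - rng.random()) / sigma_max
        if t >= t_max:
            return None                      # left the medium
        if rng.random() < sigma(t) / sigma_max:
            return t                         # real (non-null) collision

rng = np.random.default_rng(1)
sigma = lambda t: 0.5 + 0.4 * np.sin(4.0 * t) ** 2   # toy extinction field
hits = [woodcock_track(sigma, 0.9, 10.0, rng) for _ in range(5)]
print(hits)
```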
  • Item
    Similarity Voting based Viewpoint Selection for Volumes
    (The Eurographics Association and John Wiley & Sons Ltd., 2016) Tao, Yubo; Wang, Qirui; Chen, Wei; Wu, Yingcai; Lin, Hai; Eds.: Kwan-Liu Ma, Giuseppe Santucci, and Jarke van Wijk
    Previous viewpoint selection methods in volume visualization are generally based on deterministic measures of viewpoint quality. However, such measures may not capture users' familiarity with, and aesthetic sense for, features of interest. In this paper, we propose an image-based viewpoint selection model that learns how visualization experts choose representative viewpoints for volumes with similar features. For a given volume, we first collect images with similar features; these images reflect the viewpoint preferences of the experts when visualizing such features. Each collected image casts votes for the best-matching viewpoints according to an image similarity measure, which evaluates the spatial shape and appearance similarity between the collected image and the image rendered from each viewpoint. The optimal viewpoint is the one with the most votes from the collected images, that is, the viewpoint chosen by most visualization experts for similar features. We performed experiments on a variety of volumes commonly used in volume visualization, and compared our model with traditional viewpoint selection methods. The results demonstrate that our model selects more canonical viewpoints, which are consistent with human perception.
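    The voting step can be sketched generically: given a precomputed similarity matrix between collected images and candidate viewpoints, each image votes for its best-matching viewpoint and the viewpoint with the most votes wins. The similarity measure itself, which is the core of the paper, is abstracted away here, and the matrix below is invented.
```python
import numpy as np

def select_viewpoint(similarity):
    """Given an (images x viewpoints) similarity matrix, let each collected
    image vote for its best-matching viewpoint and return the winner."""
    votes = np.bincount(np.argmax(similarity, axis=1),
                        minlength=similarity.shape[1])
    return int(np.argmax(votes)), votes

# Toy usage: 4 collected images scored against 3 candidate viewpoints.
S = np.array([[0.2, 0.9, 0.1],
              [0.3, 0.8, 0.4],
              [0.7, 0.6, 0.5],
              [0.1, 0.7, 0.6]])
best, votes = select_viewpoint(S)
print("viewpoint", best, "with votes", votes)   # viewpoint 1 wins 3 votes
```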
  • Item
    Nonparametric Models for Uncertainty Visualization
    (The Eurographics Association and Blackwell Publishing Ltd., 2013) Pöthkow, Kai; Hege, Hans-Christian; Eds.: B. Preim, P. Rheingans, and H. Theisel
    An uncertain (scalar, vector, tensor) field is usually perceived as a discrete random field with a priori unknown probability distributions. To compute derived probabilities, e.g., for the occurrence of certain features, an appropriate probabilistic model has to be selected. The majority of previous approaches in uncertainty visualization were restricted to Gaussian fields. In this paper we extend these approaches to nonparametric models, which are much more flexible, as they can represent various types of distributions, including multimodal and skewed ones. We present three examples of nonparametric representations: (a) empirical distributions, (b) histograms and (c) kernel density estimates (KDE). While the first is a direct representation of the ensemble data, the latter two use reconstructed probability density functions of continuous random variables. For KDE we propose an approach to compute valid, consistent marginal distributions and to efficiently capture correlations using a principal component transformation. Furthermore, we use automatic bandwidth selection, obtaining a model for probabilistic local feature extraction. The methods are demonstrated by computing probabilities of level crossings, critical points and vortex cores in simulated biofluid dynamics and climate data.
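    For the KDE representation with automatic bandwidth selection, an off-the-shelf sketch is possible with SciPy: gaussian_kde picks a bandwidth automatically (Scott's rule by default, one standard automatic choice, not necessarily the paper's), and a level-crossing-style probability reduces to integrating the estimated density. The toy bimodal ensemble below is invented for illustration.
```python
import numpy as np
from scipy.stats import gaussian_kde

# Ensemble of scalar values at one grid point (toy bimodal data).
rng = np.random.default_rng(2)
ensemble = np.concatenate([rng.normal(-1.0, 0.3, 40),
                           rng.normal(1.5, 0.5, 60)])

# gaussian_kde selects a bandwidth automatically (Scott's rule by default),
# analogous in spirit to the automatic bandwidth selection in the paper.
kde = gaussian_kde(ensemble)

# Probability that the scalar exceeds a level c (a level-crossing-style
# query), estimated by integrating the KDE above c.
c = 0.0
p = kde.integrate_box_1d(c, np.inf)
print(f"P(value > {c}) = {p:.3f}")
```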
  • Item
    Visibility Equalizer: Cutaway Visualization of Mesoscopic Biological Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2016) Le Muzic, Mathieu; Mindek, Peter; Sorger, Johannes; Autin, Ludovic; Goodsell, David S.; Viola, Ivan; Eds.: Kwan-Liu Ma, Giuseppe Santucci, and Jarke van Wijk
    In scientific illustrations and visualization, cutaway views are often employed as an effective technique for occlusion management in densely packed scenes. We propose a novel method for authoring cutaway illustrations of mesoscopic biological models. In contrast to existing cutaway algorithms, we take advantage of the specific nature of the biological models. These models consist of thousands of instances with a comparably smaller number of different types. Our method is a two-stage process. In the first stage, clipping objects are placed in the scene, creating a cutaway visualization of the model. During this process, a hierarchical list of stacked bars informs the user about the instance visibility distribution of each individual molecular type in the scene. In the second stage, the visibility of each molecular type is fine-tuned through these bars, which at this point act as interactive visibility equalizers. An evaluation of our technique with domain experts confirmed that our equalizer-based approach to visibility specification is valuable and effective for both scientific and educational purposes.
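    The data behind each stacked bar, i.e. the per-type visible/total instance counts, can be sketched with plain counting. The representation below (a list of (type, visible) pairs, and the molecule names) is hypothetical; the paper's rendering-based visibility determination is not modeled.
```python
from collections import Counter

def visibility_histogram(instances):
    """Per-type (visible, total) counts, the data behind one stacked bar.
    `instances` is a list of (molecular_type, is_visible) pairs."""
    total, visible = Counter(), Counter()
    for mol_type, is_visible in instances:
        total[mol_type] += 1
        visible[mol_type] += int(is_visible)
    return {t: (visible[t], total[t]) for t in total}

# Toy scene with three hypothetical molecular types.
scene = [("hemoglobin", True), ("hemoglobin", False), ("hemoglobin", True),
         ("lipid", False), ("lipid", False), ("fibrinogen", True)]
for mol_type, (vis, tot) in visibility_histogram(scene).items():
    print(f"{mol_type:11s} {vis}/{tot} visible")
```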
  • Item
    Self Tuning Texture Optimization
    (The Eurographics Association and John Wiley & Sons Ltd., 2015) Kaspar, Alexandre; Neubert, Boris; Lischinski, Dani; Pauly, Mark; Kopf, Johannes; Eds.: Olga Sorkine-Hornung and Michael Wimmer
    The goal of example-based texture synthesis methods is to generate arbitrarily large textures from limited exemplars in order to fit the exact dimensions and resolution required for a specific modeling task. The challenge is to faithfully capture all of the visual characteristics of the exemplar texture, without introducing obvious repetitions or unnatural-looking visual elements. While existing non-parametric synthesis methods have made remarkable progress towards this goal, most such methods have been demonstrated only on relatively low-resolution exemplars. Real-world high-resolution textures often contain texture details at multiple scales, which these methods have difficulty reproducing faithfully. In this work, we present a new general-purpose and fully automatic self-tuning non-parametric texture synthesis method that extends Texture Optimization by introducing several key improvements that result in superior synthesis ability. Our method is able to self-tune its various parameters and weights, and focuses on addressing three challenging aspects of texture synthesis: (i) irregular large-scale structures are faithfully reproduced through the use of automatically generated and weighted guidance channels; (ii) repetition and smoothing of texture patches is avoided by new spatial uniformity constraints; (iii) a smart initialization strategy is used to improve the synthesis of regular and near-regular textures, without affecting textures that do not exhibit regularities. We demonstrate the versatility and robustness of our completely automatic approach on a variety of challenging high-resolution texture exemplars.
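    The Texture Optimization framework this method extends alternates two steps: match every output patch to its nearest exemplar patch, then blend the matched patches back into the output. The sketch below shows one such brute-force, single-scale, grayscale iteration; none of the paper's self-tuning weights, guidance channels, or uniformity constraints are included.
```python
import numpy as np

def texture_optimization_step(output, exemplar, patch=5, stride=2):
    """One EM-style Texture Optimization iteration on grayscale images:
    match each output patch to its nearest exemplar patch (E-step), then
    re-average overlapping matched patches into the output (M-step)."""
    h, w = output.shape
    eh, ew = exemplar.shape
    # Gather all exemplar patches once (brute force; fine for tiny inputs).
    ex_patches = np.array([exemplar[i:i + patch, j:j + patch].ravel()
                           for i in range(eh - patch + 1)
                           for j in range(ew - patch + 1)])
    acc = np.zeros_like(output)
    cnt = np.zeros_like(output)
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            q = output[i:i + patch, j:j + patch].ravel()
            best = ex_patches[np.argmin(((ex_patches - q) ** 2).sum(axis=1))]
            acc[i:i + patch, j:j + patch] += best.reshape(patch, patch)
            cnt[i:i + patch, j:j + patch] += 1
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), output)

# Toy usage: random exemplar and initialization, a few iterations.
rng = np.random.default_rng(3)
exemplar = rng.random((32, 32))
output = rng.random((48, 48))
for _ in range(3):
    output = texture_optimization_step(output, exemplar)
```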
  • Item
    Visualizing Time-Specific Hurricane Predictions, with Uncertainty, from Storm Path Ensembles
    (The Eurographics Association and John Wiley & Sons Ltd., 2015) Liu, Le; Mirzargar, Mahsa; Kirby, Robert M.; Whitaker, Ross; House, Donald H.; Eds.: H. Carr, K.-L. Ma, and G. Santucci
    The U.S. National Hurricane Center (NHC) issues advisories every six hours during the life of a hurricane. These advisories describe the current state of the storm, and its predicted path, size, and wind speed over the next five days. However, from these data alone, the question "What is the likelihood that the storm will hit Houston with hurricane-strength winds between 12:00 and 14:00 on Saturday?" cannot be directly answered. To address this issue, the NHC has recently begun making an ensemble of potential storm paths available as part of each storm advisory. Since each path is parameterized by time, predicted values such as wind speed associated with the path can be inferred for a specific time period by analyzing the statistics of the ensemble. This paper proposes an approach for generating smooth scalar fields from such a predicted storm path ensemble, allowing the user to examine the predicted state of the storm at any chosen time. As a demonstration task, we show how our approach can be used to support a visualization tool that displays the predicted storm position - including its uncertainty - at any time in the forecast. In our approach, we estimate the likelihood of hurricane risk for a fixed time at any geospatial location by interpolating simplicial depth values in the path ensemble. Adaptively sized radial basis functions are used to carry out the interpolation. Finally, geometric fitting is used to produce a simple graphical visualization of this likelihood. We also employ a non-linear filter, in time, to ensure frame-to-frame coherency in the visualization as the prediction time is advanced. We explain the underlying algorithm and definitions, and give a number of examples of how our algorithm performs for several different storm predictions, and for two different sources of predicted path ensembles.
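    Simplicial depth, the ensemble statistic being interpolated, has a direct brute-force definition in 2-D: the fraction of triangles spanned by ensemble members that contain the query point. A sketch under that definition (with an invented toy ensemble) follows; the paper's RBF interpolation and geometric fitting stages are not reproduced.
```python
import numpy as np
from itertools import combinations

def simplicial_depth(point, samples):
    """Fraction of triangles with vertices drawn from `samples` that
    strictly contain `point`: 2-D simplicial depth, brute force."""
    def sign(a, b, c):
        return np.sign((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
    inside = 0
    tris = list(combinations(range(len(samples)), 3))
    for i, j, k in tris:
        s1 = sign(samples[i], samples[j], point)
        s2 = sign(samples[j], samples[k], point)
        s3 = sign(samples[k], samples[i], point)
        if s1 == s2 == s3 and s1 != 0:   # same side of all three edges
            inside += 1
    return inside / len(tris)

rng = np.random.default_rng(4)
pts = rng.normal(size=(15, 2))                      # toy ensemble of positions
print(simplicial_depth(np.zeros(2), pts))           # deep: near the center
print(simplicial_depth(np.array([3.0, 3.0]), pts))  # shallow: an outlier
```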
  • Item
    Freeform Shadow Boundary Editing
    (The Eurographics Association and Blackwell Publishing Ltd., 2013) Mattausch, Oliver; Igarashi, Takeo; Wimmer, Michael; Eds.: I. Navazo and P. Poulin
    We present an algorithm for artistically modifying physically based shadows. With our tool, an artist can directly edit the shadow boundaries in the scene in an intuitive fashion similar to freeform curve editing. Our algorithm then makes these shadow edits consistent across varying light directions and scene configurations by creating a shadow mesh from the new silhouettes. The shadow mesh helps a modified shadow volume algorithm cast shadows that conform to the artistic shadow boundary edits, while providing plausible interaction with dynamic environments, including animation of both characters and light sources. Our algorithm provides significantly more fine-grained local and direct control than previous artistic light editing methods, which makes it simple to adjust the shadows in a scene to achieve a particular effect, or to create interesting shadow shapes and shadow animations. All cases are handled with a single intuitive interface, be it soft shadows or (self-)shadows on arbitrary receivers.
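    The basic shadow-volume construction that a shadow mesh feeds into can be sketched by extruding silhouette edges away from the light. The sketch below is only this generic construction, with an invented silhouette polyline and light direction; it is not the paper's algorithm for keeping edited boundaries consistent.
```python
import numpy as np

def extrude_silhouette(silhouette, light_dir, distance=100.0):
    """Extrude a polyline of silhouette points away from a directional
    light to form the side quads of a shadow volume (generic sketch)."""
    d = np.asarray(light_dir, float)
    d /= np.linalg.norm(d)
    quads = []
    for p0, p1 in zip(silhouette[:-1], silhouette[1:]):
        # Each silhouette edge becomes one quad reaching into the shadow.
        quads.append([p0, p1, p1 + distance * d, p0 + distance * d])
    return np.array(quads)

# Toy usage with a hypothetical 3-point silhouette and light direction.
silhouette = np.array([[0.0, 1.0, 0.0], [0.5, 1.2, 0.0], [1.0, 1.0, 0.2]])
quads = extrude_silhouette(silhouette, light_dir=[0.0, -1.0, 0.3])
print(quads.shape)   # (2, 4, 3): two quads, four 3-D vertices each
```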
  • Item
    Progressive Splatting of Continuous Scatterplots and Parallel Coordinates
    (The Eurographics Association and Blackwell Publishing Ltd., 2011) Heinrich, Julian; Bachthaler, Sven; Weiskopf, Daniel; Eds.: H. Hauser, H. Pfister, and J. J. van Wijk
    Continuous scatterplots and parallel coordinates are used to visualize multivariate data defined on a continuous domain. With existing techniques, rendering such plots becomes prohibitively slow, especially for large scientific datasets. This paper presents a scalable and progressive rendering algorithm for continuous data plots that allows exploratory analysis of large datasets at interactive frame rates. The algorithm employs splatting to produce a series of plots that are combined using alpha blending to achieve a progressively improving image. For each individual frame, splats are obtained by transforming Gaussian density kernels from the 3-D domain of the input dataset to the respective data domain. A closed-form analytic description of the resulting splat footprints is derived to allow pre-computation of splat textures for efficient GPU rendering. The plotting method is versatile because it supports arbitrary reconstruction or interpolation schemes for the input data, and the splatting technique is scalable because it chooses splat samples independently of the size of the input dataset. Finally, the effectiveness of the method is compared to existing techniques regarding rendering performance and quality.
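    The kernel-transformation step has a well-known linear-algebra core: under a locally linear map with Jacobian J, a Gaussian kernel with covariance Sigma maps to a Gaussian with covariance J Sigma J^T. The sketch below shows just this identity with an invented Jacobian; the paper's closed-form footprint derivation for general reconstruction schemes goes beyond it.
```python
import numpy as np

def transformed_footprint(Sigma, J):
    """Covariance of a Gaussian kernel after a locally linear map:
    N(0, Sigma) pushed through x -> Jx stays Gaussian with covariance
    J @ Sigma @ J.T (the standard linear-transform identity)."""
    return J @ Sigma @ J.T

# Isotropic 3-D kernel, and a hypothetical 2x3 Jacobian of the map from
# the spatial domain into a 2-D scatterplot domain at one sample point.
Sigma = 0.01 * np.eye(3)
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.3, 1.2]])
print(transformed_footprint(Sigma, J))   # 2x2 footprint covariance
```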