Volume 31 (2012)
Item: Automatic Stream Surface Seeding: A Feature Centered Approach (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Authors: Edmunds, Matt; Laramee, Robert S.; Malki, Rami; Masters, Ian; Croft, Nick; Chen, Guoning; Zhang, Eugene
Editors: S. Bruckner, S. Miksch, and H. Pfister
The ability to capture and visualize information within the flow poses challenges for visualizing 3D flow fields. Stream surfaces are one of many useful integration-based techniques for visualizing 3D flow. However, seeding integral surfaces can be challenging. Previous research generally focuses on manual placement of stream surfaces; little attention has been given to the problem of automatic stream surface seeding. This paper introduces a novel automatic stream surface seeding strategy based on vector field clustering. It is important that the user can define and target particular characteristics of the flow, and our framework provides this ability. The user is able to specify different vector clustering parameters, enabling a range of abstraction for the density and placement of seeding curves and their associated stream surfaces. We demonstrate the effectiveness of this automatic stream surface approach on a range of flow simulations and incorporate illustrative visualization techniques. Domain expert evaluation of the results provides valuable insight into the users' requirements and the effectiveness of our approach.

Item: State of the Art Report on Video-Based Graphics and Video Visualization (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Authors: Borgo, R.; Chen, M.; Daubney, B.; Grundy, E.; Heidemann, G.; Höferlin, B.; Höferlin, M.; Leitte, H.; Weiskopf, D.; Xie, X.
Editors: Holly Rushmeier and Oliver Deussen
In recent years, a collection of new techniques that deal with video as input data has emerged in computer graphics and visualization. In this survey, we report the state of the art in video-based graphics and video visualization. We provide a review of techniques for making photo-realistic or artistic computer-generated imagery from videos, as well as methods for creating summary and/or abstract visual representations to reveal important features and events in videos. We provide a new taxonomy to categorize the concepts and techniques in this newly emerged body of knowledge. To support this review, we also give a concise overview of the major advances in automated video analysis, as some techniques in this field (e.g. feature extraction, detection, tracking and so on) have been featured in video-based modelling and rendering pipelines for graphics and visualization.

Item: Illustrative Membrane Clipping (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Authors: Birkeland, Åsmund; Bruckner, Stefan; Brambilla, Andrea; Viola, Ivan
Editors: S. Bruckner, S. Miksch, and H. Pfister
Clipping is a fast, common technique for resolving occlusions. It only requires simple interaction, is easily understandable, and thus has been very popular for volume exploration.
However, a drawback of clipping is that the technique indiscriminately cuts through features. Illustrators, for example, consider the structures in the vicinity of the cut when visualizing complex spatial data and make sure that smaller structures near the clipping plane are kept in the image and not cut into fragments. In this paper we present a new technique which combines the simple clipping interaction with automated selective feature preservation using an elastic membrane. In order to prevent cutting objects near the clipping plane, the deformable membrane uses underlying data properties to adjust itself to salient structures. To achieve this behaviour, we translate data attributes into a potential field which acts on the membrane, thus moving the problem of deformation into the soft-body dynamics domain. This allows us to exploit existing GPU-based physics libraries which achieve interactive frame rates. For manual adjustment, the user can insert additional potential fields, as well as pin the membrane to interesting areas. We demonstrate that our method can act as a flexible and non-invasive replacement of traditional clipping planes.

Item: New Bounds on the Size of Optimal Meshes (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Authors: Sheehy, Donald R.
Editors: Eitan Grinspun and Niloy Mitra
The theory of optimal size meshes gives a method for analyzing the output size (number of simplices) of a Delaunay refinement mesh in terms of the integral of a sizing function over the input domain. The input points define a maximal such sizing function called the feature size. This paper presents a way to bound the feature size integral in terms of an easy-to-compute property of a suitable ordering of the point set. The key idea is to consider the pacing of an ordered point set, a measure of the rate of change in the feature size as points are added one at a time. In previous work, Miller et al. showed that if an ordered point set has pacing Φ, then the number of vertices in an optimal mesh will be O(Φ^d n), where d is the input dimension. We give a new analysis of this integral showing that the output size is only Θ(n + n log Φ). The new analysis tightens bounds from several previous results and provides matching lower bounds. Moreover, it precisely characterizes inputs that yield outputs of size O(n).

Item: Visualization for the Physical Sciences (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Authors: Lipşa, Dan R.; Laramee, Robert S.; Cox, Simon J.; Roberts, Jonathan C.; Walker, Rick; Borkin, Michelle A.; Pfister, Hanspeter
Editors: Holly Rushmeier and Oliver Deussen
Close collaboration with other scientific fields is an important goal for the visualization community. Yet engaging in a scientific collaboration can be challenging. The physical sciences, namely astronomy, chemistry, earth sciences and physics, exhibit an extensive range of research directions, providing exciting challenges for visualization scientists and creating ample possibilities for collaboration. We present the first survey of its kind that provides a comprehensive view of existing work on visualization for the physical sciences. We introduce novel classification schemes based on application area, data dimensionality and main challenge addressed, and apply these classifications to each contribution from the literature. Our survey helps in understanding the status of current research and serves as a useful starting point for those interested in visualization for the physical sciences.

Item: Microtiles: Extracting Building Blocks from Correspondences (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Authors: Kalojanov, Javor; Bokeloh, Martin; Wand, Michael; Guibas, Leonidas; Seidel, Hans-Peter; Slusallek, Philipp
Editors: Eitan Grinspun and Niloy Mitra
In this paper, we develop a theoretical framework for characterizing shapes by building blocks. We address two questions: First, how do shape correspondences induce building blocks? For this, we introduce a new representation for structuring partial symmetries (partial self-correspondences), which we call "microtiles". Starting from input correspondences that form point-wise equivalence relations, microtiles are obtained by grouping connected components of points that share the same set of symmetry transformations. The decomposition is unique, requires no parameters beyond the input correspondences, and encodes the partial symmetries of all subsets of the input. The second question is: What is the class of shapes that can be assembled from these building blocks? Here, we specifically consider r-similarity as the correspondence model, i.e., matching of local r-neighborhoods. Our main result is that the microtiles of the partial r-symmetries of an object S can build all objects that are (r+e)-similar to S for any e>0. Again, the construction is unique. Furthermore, we give necessary conditions for a set of assembly rules for the pairwise connection of tiles.
We describe a practical algorithm for computing microtile decompositions under rigid motions and a corresponding prototype implementation, and we conduct a number of experiments to visualize the structural properties in practice.

Item: Tessellation-Independent Smooth Shadow Boundaries (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Authors: Mattausch, Oliver; Scherzer, Daniel; Wimmer, Michael; Igarashi, Takeo
Editors: Fredo Durand and Diego Gutierrez
We propose an efficient and lightweight solution for rendering smooth shadow boundaries that do not reveal the tessellation of the shadow-casting geometry. Our algorithm reconstructs the smooth contours of the underlying mesh and then extrudes shadow volumes from the smooth silhouettes to render the shadows. For this purpose we propose an improved silhouette reconstruction using the vertex normals of the underlying smooth mesh. Our method then subdivides the silhouette loops until the contours are sufficiently smooth and project to smooth shadow boundaries. This approach decouples the shadow smoothness from the tessellation of the geometry and can be used to maintain equally high shadow quality for multiple levels of detail. It causes only a minimal change to the fill rate, which is the well-known bottleneck of shadow volumes, and hence has only a small overhead.

Item: Metering for Exposure Stacks (The Eurographics Association and John Wiley and Sons Ltd., 2012)
Authors: Gallo, Orazio; Tico, Marius; Manduchi, Roberto; Gelfand, Natasha; Pulli, Kari
Editors: P. Cignoni and T. Ertl
When creating a High-Dynamic-Range (HDR) image from a sequence of differently exposed Low-Dynamic-Range (LDR) images, the set of LDR images is usually generated by sampling the space of exposure times with a geometric progression, without explicitly accounting for the distribution of irradiance values in the scene. We argue that this choice can produce sub-optimal results, both in terms of the number of acquired pictures and the quality of the resulting HDR image. This paper presents a method to estimate the full irradiance histogram of a scene, and a strategy to select the set of exposures that need to be acquired. Our selection usually requires a smaller or equal set of LDR images, yet produces higher quality HDR images.

Item: A Qualitative Study on the Exploration of Temporal Changes in Flow Maps with Animation and Small-Multiples (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Authors: Boyandin, Ilya; Bertini, Enrico; Lalanne, Denis
Editors: S. Bruckner, S. Miksch, and H. Pfister
We present a qualitative user study analyzing findings made while exploring changes over time in spatial interactions. We analyzed findings made by the study participants with flow maps, one of the most popular representations of spatial interactions, using animation and small-multiples as two alternative ways of representing temporal changes. Our goal was not to measure the subjects' performance with the two views, but to find out whether there are qualitative differences between the types of findings users make with these two representations. To achieve this goal we performed a deep analysis of the collected findings, the interaction logs, and the subjective feedback from the users. We observed that with animation the subjects tended to make more findings concerning geographically local events and changes between subsequent years, whereas with small-multiples more findings concerning longer time periods were made. Moreover, our results suggest that switching from one view to the other might increase the number of findings of specific types made by the subjects, which can be beneficial for certain tasks.

Item: A Cell-Based Light Interaction Model for Human Blood (The Eurographics Association and John Wiley and Sons Ltd., 2012)
Authors: Yim, Daniel; Baranoski, Gladimir V. G.; Kimmel, Brad W.; Chen, T. Francis; Miranda, Erik
Editors: P. Cignoni and T. Ertl
The development of predictive appearance models for organic tissues is a challenging task due to the inherent complexity of these materials. In this paper, we closely examine the biophysical processes responsible for the appearance attributes of whole blood, one of the most fundamental of these materials. We describe a new appearance model that simulates the mechanisms of light propagation and absorption within the cellular and fluid portions of this specialized tissue. The proposed model employs a comprehensive, yet flexible, first-principles approach based on the morphological, optical and biochemical properties of blood cells. This approach allows environment-driven changes in the cells' anatomy and orientation to be appropriately included in the light transport simulations. The correctness and predictive capabilities of the proposed model are quantitatively and qualitatively evaluated through comparisons of modeled results with actual measured data and experimental observations reported in the scientific literature. Its incorporation into rendering systems is illustrated through images of blood samples depicting appearance variations controlled by physiologically meaningful parameters. Besides its contributions to the modeling of material appearance, the research presented in this paper is also expected to have applications in a wide range of biomedical areas, from optical diagnostics to the visualization and noninvasive imaging of blood-perfused tissues.

Item: Interactive Multi-perspective Imagery from Photos and Videos (The Eurographics Association and John Wiley and Sons Ltd., 2012)
Authors: Lieng, Henrik; Tompkin, James; Kautz, Jan
Editors: P. Cignoni and T. Ertl
Photographs usually show a scene from a single perspective. However, as commonly seen in art, scenes and objects can be visualized from multiple perspectives. Making such images manually is time-consuming and tedious. We propose a novel system for designing multi-perspective images and videos. First, the images in the input sequence are aligned using structure from motion, which enables us to track feature points across the sequence. Second, the user chooses portal polygons in a target image into which different perspectives are to be embedded. The corresponding image regions from the other images are then copied into these portals. Thanks to the feature tracking and automatic warping, this approach is considerably faster than current tools. We explore a wide range of artistic applications of our system with image and video data, such as looking around corners and up and down staircases, recursive multi-perspective imaging, cubism and panoramas.

Item: Black is Green: Adaptive Color Transformation For Reduced Ink Usage (The Eurographics Association and John Wiley and Sons Ltd., 2012)
Authors: Shapira, Lior; Oicherman, Boris
Editors: P. Cignoni and T. Ertl
The vast majority of color transformations applied to an image in the digital press industry are static and precalculated. In order to achieve the best quality on a wide variety of different images, these transformations tend to be highly conservative with respect to the use of black ink. This results in excessive use of inks, which has a negative economic and environmental impact. We present a method for dynamic computation of a color transformation based on image content, with the aim of reducing ink usage. We analyze the image and predict areas in which quality artifacts that may result from such a reduction will be masked by the image content. These areas include detailed textures, noisy areas and structure. We then replace the image's CMYK values with a new combination with increased black. Our algorithm ensures negligible color shifts in the resulting image and no visible reduction in quality. We achieve an average of over 10% ink savings.

Item: Procedural Interpolation of Historical City Maps (The Eurographics Association and John Wiley and Sons Ltd., 2012)
Authors: Krecklau, Lars; Manthei, Christopher; Kobbelt, Leif
Editors: P. Cignoni and T. Ertl
We propose a novel approach for the temporal interpolation of city maps. The input to our algorithm is a sparse set of historical city maps plus optional additional knowledge about construction or destruction events. The output is a fast-forward animation of the city map development, where roads and buildings are constructed and destroyed over time in order to match the sparse historical facts and to look plausible where no precise facts are available. A smooth transition between real-world data points can be interesting for educational purposes, because our system conveys an intuition of the city's development. The insertion of data, such as when and where a certain building or road existed, is efficiently performed through an intuitive graphical user interface. Our system collects all this information into a global dependency graph of events. By propagating time intervals through the dependency graph, we can automatically derive the earliest and latest possible date for each event, which guarantees temporal as well as geographical consistency (e.g. buildings can only appear along roads that have been constructed before). During the simulation of the city development, events are scheduled according to a score function that rates the plausibility of the development (e.g. cities grow along major roads). Finally, the events are properly distributed over time to control the dynamics of the city development. Based on the city map animation, we create a procedural city model in order to render a 3D animation of the city development over decades.

Item: Importance Caching for Complex Illumination (The Eurographics Association and John Wiley and Sons Ltd., 2012)
Authors: Georgiev, Iliyan; Krivánek, Jaroslav; Popov, Stefan; Slusallek, Philipp
Editors: P. Cignoni and T. Ertl
Realistic rendering requires computing the global illumination in the scene, and Monte Carlo integration is the best-known method for doing that.
The key to good performance is to carefully select the costly integration samples, which is usually achieved via importance sampling. Unfortunately, visibility is difficult to factor into the importance distribution, which can greatly increase variance in highly occluded scenes with complex illumination. In this paper, we present importance caching - a novel approach that selects samples with a distribution that includes visibility, while maintaining efficiency by exploiting illumination smoothness. At a sparse set of locations in the scene, we construct and cache several types of probability distributions with respect to a set of virtual point lights (VPLs), which notably include visibility. Each distribution type is optimized for a specific lighting condition. For every shading point, we then borrow the distributions from nearby cached locations and use them for VPL sampling, avoiding additional bias. A novel multiple importance sampling framework finally combines the many estimators. In highly occluded scenes, where visibility is a major source of variance in the incident radiance, our approach can reduce variance by more than an order of magnitude. Even in such complex scenes, we can obtain accurate, low-noise previews with full global illumination in a couple of seconds on a single mid-range CPU.

Item: Robust Image Retargeting via Axis-Aligned Deformation (The Eurographics Association and John Wiley and Sons Ltd., 2012)
Authors: Panozzo, Daniele; Weber, Ofir; Sorkine, Olga
Editors: P. Cignoni and T. Ertl
We propose the space of axis-aligned deformations as the meaningful space for content-aware image retargeting. Such deformations exclude local rotations, avoiding harmful visual distortions, and they are parameterized in 1D. We show that standard warping energies for image retargeting can be minimized in the space of axis-aligned deformations while guaranteeing that bijectivity constraints are satisfied, leading to high-quality, smooth and robust retargeting results. Thanks to the 1D parameterization, our method only requires solving a small quadratic program, which can be done within a few milliseconds on the CPU with no precomputation overhead. We demonstrate how the image size and the saliency map can be changed in real time with our approach, and present results on various input images, including the RetargetMe benchmark. We compare our results with six other algorithms in a user study to demonstrate that the space of axis-aligned deformations is suitable for the problem at hand.

Item: Data Driven Surface Reflectance from Sparse and Irregular Samples (The Eurographics Association and John Wiley and Sons Ltd., 2012)
Authors: Ruiters, Roland; Schwartz, Christopher; Klein, Reinhard
Editors: P. Cignoni and T. Ertl
In recent years, measuring surface reflectance has become an established method for high-quality renderings. In this context, non-parametric representations in particular have received a lot of attention, as they allow for a very accurate representation of complex reflectance behavior. However, the acquisition of this data is a challenging task, especially if complex object geometry is involved. Capturing images of the object under varying illumination and view conditions results in irregular angular samplings of the reflectance function with a limited angular resolution. Classical data-driven techniques, like tensor factorization, are not well suited for such data sets, as they require resampling of the high-dimensional measurement data to a regular grid. This grid has to be at a much higher angular resolution to avoid resampling artifacts, which in turn would lead to data sets of enormous size. To overcome these problems, we introduce a novel, compact data-driven representation of reflectance functions based on a sum of separable functions, which are fitted directly to the irregular set of data without any further resampling. The representation allows for efficient rendering and is also well suited for GPU applications. By exploiting the spatial coherence of the reflectance function over the object, a very precise reconstruction, even of specular materials, becomes possible already with a sparse input sampling; this would be impossible using standard data interpolation techniques. Since our algorithm operates exclusively on the compressed representation, it is efficient in terms of both memory use and computational complexity, depending only sub-linearly on the size of the fully tabulated data. The quality of the reflectance function is evaluated on synthetic data sets as ground truth, as well as on real-world measurements.

Item: Dependency-Free Parallel Progressive Meshes (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Authors: Derzapf, E.; Guthe, M.
Editors: Holly Rushmeier and Oliver Deussen
The constantly increasing complexity of polygonal models in interactive applications poses two major problems. First, the number of primitives that can be rendered at real-time frame rates is currently limited to a few million. Second, less than 45 million triangles (with vertices and normals) can be stored per gigabyte. Although the rendering time can be reduced using level-of-detail (LOD) algorithms, which represent a model at different complexity levels, these often even increase memory consumption. Out-of-core algorithms solve this problem by transferring the data currently required for rendering from external devices. Compression techniques are commonly used because of the limited bandwidth. The main problem of compression and decompression algorithms is that they offer only coarse-grained random access. A similar problem occurs in view-dependent LOD techniques: because of the interdependency of split operations, the adaptation rate is reduced, leading to visible popping artefacts during fast movements. In this paper, we propose a novel algorithm for real-time view-dependent rendering of gigabyte-sized models. It is based on a neighbourhood dependency-free progressive mesh data structure.
Using a per-operation compression method, it is suitable for parallel random-access decompression and out-of-core memory management without storing decompressed data.

Item: Procedural Texture Preview (The Eurographics Association and John Wiley and Sons Ltd., 2012)
Authors: Lasram, Anass; Lefebvre, Sylvain; Damez, Cyrille
Editors: P. Cignoni and T. Ertl
Procedural textures usually require spending time testing parameters to realize the diversity of appearances they can produce. This paper introduces the idea of a procedural texture preview: a single static image summarizing, in a limited pixel space, the appearances produced by a given procedure. Unlike grids of thumbnails, our previews present a continuous image of appearances, analogous to a map. The main challenge is to ensure that most appearances are visible, are allocated a similar pixel area, and are ordered in a smooth manner throughout the preview. To reach this goal, we introduce a new layout algorithm that accounts for these criteria simultaneously. After computing a layout of appearances, we rely on by-example texture synthesis to produce the final preview. We demonstrate our approach on a database of production-level procedural textures.

Item: Beyond Catmull–Clark? A Survey of Advances in Subdivision Surface Methods (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Authors: Cashman, Thomas J.
Editors: Holly Rushmeier and Oliver Deussen
Subdivision surfaces allow smooth free-form surface modelling without topological constraints. They have become a fundamental representation for smooth geometry, particularly in the animation and entertainment industries. This survey summarizes research on subdivision surfaces over the last 15 years in three major strands: analysis, integration into existing systems, and the development of new schemes. We also examine the reasons for the low adoption of new schemes with theoretical advantages, explain why Catmull–Clark surfaces have become a de facto standard in geometric modelling, and conclude by identifying directions for future research.

Item: Computing Extremal Quasiconformal Maps (The Eurographics Association and Blackwell Publishing Ltd., 2012)
Authors: Weber, Ofir; Myles, Ashish; Zorin, Denis
Editors: Eitan Grinspun and Niloy Mitra
Conformal maps are widely used in geometry processing applications. They are smooth, preserve angles, and are locally injective by construction. However, conformal maps do not allow boundary positions to be prescribed. A natural extension of the space of conformal maps is the richer space of quasiconformal maps of bounded conformal distortion. Extremal quasiconformal maps, that is, maps minimizing the maximal conformal distortion, have a number of appealing properties that make them suitable candidates for geometry processing tasks. Similarly to conformal maps, they are guaranteed to be locally bijective; unlike conformal maps, however, extremal quasiconformal maps have sufficient flexibility to allow for the solution of boundary value problems. Moreover, in practically relevant cases, these solutions are guaranteed to exist, are unique, and have an explicit characterization. We present an algorithm for computing piecewise linear approximations of extremal quasiconformal maps for genus-zero surfaces with boundaries, based on Teichmüller's characterization of the dilatation of extremal maps using holomorphic quadratic differentials. We demonstrate that the algorithm closely approximates the maps when an explicit solution is available and exhibits good convergence properties for a variety of boundary conditions.
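As background for the distortion measure named in the last abstract: for a map that is locally linear with 2x2 Jacobian J, the conformal dilatation is the ratio of the larger to the smaller singular value of J; a conformal map has dilatation 1, and a quasiconformal map bounds it by some K. The sketch below is only this standard definition evaluated in closed form for a 2x2 matrix; it is illustrative background, not the authors' algorithm, which relies on Teichmüller theory and holomorphic quadratic differentials.

```python
import math

def dilatation(a, b, c, d):
    """Conformal dilatation K = sigma_max / sigma_min of the linear map
    with Jacobian [[a, b], [c, d]], using the closed-form singular
    values of a 2x2 matrix."""
    t1 = a * a + b * b + c * c + d * d
    # sqrt(trace^2 - 4*det) of J^T J, written in a numerically stable form
    t2 = math.hypot(a * a + b * b - c * c - d * d, 2 * (a * c + b * d))
    smax = math.sqrt((t1 + t2) / 2)
    smin = math.sqrt(max((t1 - t2) / 2, 0.0))
    if smin == 0.0:
        raise ValueError("map is degenerate (not locally injective)")
    return smax / smin

# A similarity (rotation by 45 degrees, uniform scale sqrt(2)) is conformal.
print(dilatation(1, -1, 1, 1))   # -> 1.0
# A pure stretch along x by a factor of 2 has dilatation 2.
print(dilatation(2, 0, 0, 1))    # -> 2.0
```

A map is K-quasiconformal when this ratio stays at most K everywhere; the extremal maps of the abstract minimize the maximum of this quantity over the surface.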