Search Results (showing 1–10 of 45)
Item Camera Motion Graphs (The Eurographics Association, 2014)
Sanokho, Cunka Bassirou; Desoche, Clement; Merabti, Billal; Li, Tsai-yen; Christie, Marc. Editors: Vladlen Koltun and Eftychios Sifakis.
This paper presents Camera Motion Graphs, a technique to easily and efficiently generate cinematographic sequences in real-time dynamic 3D environments. A camera motion graph consists of (i) pieces of original camera trajectories attached to one or multiple targets, (ii) generated continuous transitions between camera trajectories, and (iii) transitions representing cuts between camera trajectories. Pieces of original camera trajectories are built by extracting camera motions from real movies using vision-based techniques, or by relying on motion capture techniques using a virtual camera system. A transformation is proposed to recompute all camera trajectories in a normalized representation, making camera paths easily adaptable to new 3D environments through a specific retargeting technique. The camera motion graph is then constructed by sampling all pairs of camera trajectories and evaluating the possibility and quality of continuous or cut transitions. Results illustrate the simplicity of the technique, its adaptability to different 3D environments, and its efficiency.

Item Example-based Haze Removal with two-layer Gaussian Process Regressions (The Eurographics Association, 2014)
Fan, Xin; Gao, Renjie; Wang, Yi. Editors: John Keyser, Young J. Kim, and Peter Wonka.
Hazy images suffer from low visibility and contrast. Over the past decade, researchers have devoted great effort to haze removal based on prior assumptions about observations. However, such observation-based priors provide only limited information for high-quality restoration, and the assumptions do not always hold for generic images in practice. At the same time, the amount of available visual data keeps growing with the popularity of imaging devices. In this paper, we present a learning framework for haze removal based on two-layer Gaussian Process Regressions (GPR). Using training examples, the two-layer GPRs establish a direct relationship from the input image to the depth-dependent transmission, while also learning local image priors that further improve the estimation. We also provide a method to collect training pairs for images of natural scenes. Both qualitative and quantitative comparisons on simulated and real-world hazy images demonstrate the effectiveness of the approach, especially in the presence of white or bright objects and heavy haze, where existing dehazing methods may fail.
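The two-layer construction described in the abstract above lends itself to a compact sketch. The following is a minimal, hypothetical illustration using scikit-learn's GaussianProcessRegressor: the patch features, synthetic transmission targets, and kernels are placeholders rather than the authors' actual features or training data; the point is only the layering, where the second GPR takes the first layer's estimate as an additional input.

```python
# Minimal sketch of a two-layer GPR pipeline for transmission estimation;
# features, targets, and kernels are illustrative, not the paper's setup.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical training pairs: per-patch features of hazy images
# (e.g., dark channel, local contrast, saturation) -> known transmission.
X_train = rng.random((200, 3))                       # stand-in patch features
t_train = np.clip(0.2 + 0.7 * X_train[:, 0], 0, 1)   # synthetic transmissions

# Layer 1: coarse mapping from image features to transmission.
gpr1 = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-3))
gpr1.fit(X_train, t_train)

# Layer 2: refine by feeding the coarse estimate back in as an extra input,
# letting the second GPR learn a local prior on top of layer 1.
t_coarse = gpr1.predict(X_train)
gpr2 = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-3))
gpr2.fit(np.column_stack([X_train, t_coarse]), t_train)

# Inference on new patches: layer 1 first, then layer 2.
X_new = rng.random((5, 3))
t_hat = gpr2.predict(np.column_stack([X_new, gpr1.predict(X_new)]))
print(t_hat)
```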
Item GPU Visualization and Voxelization of Yarn-Level Cloth (The Eurographics Association, 2014)
Lopez-Moreno, Jorge; Cirio, Gabriel; Miraut, David; Otaduy, Miguel Angel. Editors: Adolfo Munoz and Pere-Pau Vazquez.
Most popular methods in cloth rendering rely on volumetric data in order to model complex optical phenomena such as sub-surface scattering. Previous work represents yarns as a sequence of identical but rotated cross-sections. While these approaches are able to produce very realistic illumination models, the required volumetric representation is difficult to compute and render, forfeiting any interactive feedback. In this paper, we introduce a GPU-based method for simultaneous visualization and voxelization, suitable for both interactive and offline rendering. Our method can interactively voxelize millions of polygons into a 3D texture, generating a volume with sub-voxel accuracy that is suitable even for high-density weaving such as linen.

Item Towards Efficient Online Compression of Incrementally Acquired Point Clouds (The Eurographics Association, 2014)
Golla, Tim; Schwartz, Christopher; Klein, Reinhard. Editors: Jan Bender, Arjan Kuijper, Tatiana von Landesberger, Holger Theisel, and Philipp Urban.
We present a framework for the online compression of incrementally acquired point cloud data. To this end, we extend an existing vector quantization-based offline point cloud compression algorithm to handle the challenges that arise in the envisioned online scenario. In particular, we learn a codebook in advance from training data and replace a computationally demanding part of the algorithm with a faster alternative. We show that the compression ratios and reconstruction quality are comparable to the offline version while the speed is sufficiently improved. Furthermore, we investigate how well codebooks generated from different amounts of training data generalize to larger sets of point cloud data.
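The offline-codebook/online-encoding split described above can be sketched in a few lines. This is a toy stand-in, not the authors' algorithm: it quantizes raw 3D points with scikit-learn's KMeans, whereas the paper's vector quantization sits inside a more elaborate compression pipeline; the `encode`/`decode` helpers and all data are illustrative.

```python
# Toy vector-quantization codec: learn a codebook offline from training
# data, then encode newly acquired points online as codeword indices.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Offline: learn a 256-entry codebook from training points (random stand-ins).
train_points = rng.normal(size=(5000, 3)).astype(np.float32)
codebook = KMeans(n_clusters=256, n_init=4, random_state=0).fit(train_points)

def encode(points: np.ndarray) -> np.ndarray:
    """Online step: each point becomes one byte (its nearest codeword index)."""
    return codebook.predict(points.astype(np.float32)).astype(np.uint8)

def decode(indices: np.ndarray) -> np.ndarray:
    """Reconstruct approximate points from codeword indices."""
    return codebook.cluster_centers_[indices]

chunk = rng.normal(size=(100, 3)).astype(np.float32)   # incoming chunk
codes = encode(chunk)                                   # 1 byte/point vs. 12
error = np.linalg.norm(decode(codes) - chunk, axis=1).mean()
print(f"compressed {chunk.nbytes} -> {codes.nbytes} bytes, mean error {error:.3f}")
```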
Item Evaluating the Curvature Analysis as a Key Feature for the Semantic Description of Architectural Elements (The Eurographics Association, 2014)
Adrian, Julie; Lo Buglio, David; De Luca, Livio. Editors: Reinhard Klein and Pedro Santos.
Recent developments in photogrammetry and laser scanning have made possible the mass acquisition of heritage artifacts with a particularly high level of geometric accuracy. The digital model must then be processed to isolate characteristic features for an analysis of the architectural object. In this poster, we assess the potential of curvature maps extracted from digital acquisitions for studying the morphology of architectural elements. The current work focuses on the technical and theoretical issues that will ultimately lead to an average surface signature, which will make it possible to identify the degree of remoteness of each attribute.

Item Revisiting Perceptually Optimized Color Mapping for High-Dimensional Data Analysis (The Eurographics Association, 2014)
Mittelstädt, Sebastian; Bernard, Jürgen; Schreck, Tobias; Steiger, Martin; Kohlhammer, Jörn; Keim, Daniel A. Editors: N. Elmqvist, M. Hlawitschka, and J. Kennedy.
Color is one of the most effective visual variables, since it can be combined with other mappings and encodes information without using any additional space on the display. An important example where expressing additional visual dimensions is direly needed is the analysis of high-dimensional data. The property of perceptual linearity is desirable in this application, because it lets the user intuitively perceive clusters and relations among multi-dimensional data points. Many approaches use two-dimensional colormaps in their analysis, typically created by interpolating in RGB, HSV, or CIELAB color spaces. These approaches share the problem that the resulting colors are either saturated and discriminative but not perceptually linear, or vice versa. A solution that combines both advantages was previously introduced by Kaski et al.; yet, according to our literature analysis, this method is to date underutilized in Information Visualization. The method maps high-dimensional data points into the CIELAB color space while maintaining the relative perceived distances of data points and color discrimination. In this paper, we generalize and extend the method of Kaski et al. to provide perceptually uniform color mapping for the visual analysis of high-dimensional data. Further, we evaluate the method and provide guidelines for different analysis tasks.
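The core idea above, embedding data so that perceived color differences track data distances, can be sketched as follows. This is a hypothetical illustration, not Kaski et al.'s actual optimization: it places points in the CIELAB a*b* plane with metric MDS at fixed lightness and relies on CIELAB's approximate perceptual uniformity; all ranges and parameters are arbitrary.

```python
# Sketch of distance-preserving color mapping: embed high-dimensional points
# into the CIELAB a*b* plane so color differences roughly follow data distances.
import numpy as np
from sklearn.manifold import MDS
from skimage.color import lab2rgb

rng = np.random.default_rng(2)
data = rng.normal(size=(50, 10))          # hypothetical high-dimensional points

# Metric MDS into 2D; since CIELAB distances are approximately perceptual,
# preserving pairwise distances here preserves perceived dissimilarity.
ab = MDS(n_components=2, random_state=0).fit_transform(data)

# Scale into a conservative a*/b* range and fix lightness L* for uniformity.
ab *= 40.0 / np.abs(ab).max()
lab = np.column_stack([np.full(len(ab), 65.0), ab])   # columns: L*, a*, b*

rgb = lab2rgb(lab.reshape(1, -1, 3)).reshape(-1, 3)   # one RGB color per point
print(rgb[:3])
```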
Item Latency Considerations of Depth-first GPU Ray Tracing (The Eurographics Association, 2014)
Guthe, Michael. Editors: Eric Galin and Michael Wand.
Despite the potential divergence of depth-first ray tracing [AL09], it is nevertheless the most efficient approach on massively parallel graphics processors. Due to the use of specialized caching strategies originally developed for texture access, it has been shown to be compute rather than bandwidth limited. Especially with recent developments, however, not only the raw bandwidth but also the latency of both memory accesses and read-after-write register dependencies can become a limiting factor. In this paper we analyze the memory and instruction dependency latencies of depth-first ray tracing. We show that ray tracing is in fact latency limited on current GPUs and propose three simple strategies to better hide these latencies. This way, we come significantly closer to the maximum performance of the GPU.

Item Image-Based Flow Transfer (The Eurographics Association, 2014)
Bosch, Carles; Patow, Gustavo A. Editors: Adolfo Munoz and Pere-Pau Vazquez.
Weathering phenomena are ubiquitous in urban environments. In particular, fluid flow is an especially representative but difficult phenomenon to reproduce. To produce realistic flow effects, it is possible to take advantage of the widespread availability of flow images on the internet, which can be used to gather key information about the flow. In this paper we present a technique for transferring flow phenomena between photographs, adapting the flow to the target image and giving the user flexibility and control through specifically tailored parameters. This is done through two types of control curves: a fitted theoretical curve for the mass of deposited material, and a control curve extracted from the images for the color. This way, the user has a set of simple and intuitive parameters and tools to control the flow phenomena in the target image. To illustrate our technique, we present a set of images covering a wide range of flow phenomena in urban environments.

Item A Survey of GPU-Based Large-Scale Volume Visualization (The Eurographics Association, 2014)
Beyer, Johanna; Hadwiger, Markus; Pfister, Hanspeter. Editors: R. Borgo, R. Maciejewski, and I. Viola.
This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera-, and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e., "output-sensitive" algorithms and system designs. This leads to recent output-sensitive approaches that are "ray-guided," "visualization-driven," or "display-aware." In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks, i.e., the current subset of data that is minimally required to produce an output image of the desired display resolution. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we discuss in this survey. (A toy sketch of the brick working set appears after the final item in this listing.)

Item The MAM2014 Sample Set (The Eurographics Association, 2014)
Rushmeier, Holly. Editors: Reinhard Klein and Holly Rushmeier.
Modeling the material appearance of physical materials requires access to the materials. Sets of identical physical material samples were prepared for distribution at the Workshop on Material Appearance Modeling 2014 (MAM2014). The sample set is intended to facilitate the comparison of measurements and models from different laboratories, as well as psychophysical experiments comparing simulated and physical appearance.
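Returning to the volume visualization survey above (Beyer et al.), here is a minimal CPU-side sketch of how a "current working set of volume bricks" can be determined by coarse ray marching. The brick size, volume extents, and camera setup are all illustrative assumptions, and real ray-guided systems perform this pass on the GPU; the sketch only shows why the touched-brick set, not the full volume, bounds the required I/O.

```python
# Toy "working set of volume bricks": march rays through a bricked volume
# and record only the bricks the rays actually touch, so loading effort
# scales with what is visible on screen rather than with the data size.
import numpy as np

BRICK = 32                                  # brick edge length in voxels
vol_shape = np.array([256, 256, 256])       # full volume (never loaded whole)

def bricks_along_ray(origin, direction, step=4.0, t_max=600.0):
    """Coarse ray march: return the set of brick indices the ray passes through."""
    direction = direction / np.linalg.norm(direction)
    ts = np.arange(0.0, t_max, step)
    pts = origin + ts[:, None] * direction
    inside = np.all((pts >= 0) & (pts < vol_shape), axis=1)
    return {tuple(b) for b in (pts[inside] // BRICK).astype(int)}

# Rays for a tiny 4x4 "screen"; a real renderer would do this per pixel.
working_set = set()
for y in np.linspace(40, 200, 4):
    for x in np.linspace(40, 200, 4):
        working_set |= bricks_along_ray(np.array([x, y, -50.0]),
                                        np.array([0.0, 0.0, 1.0]))

total = np.prod(vol_shape // BRICK)
print(f"working set: {len(working_set)} of {total} bricks")
```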