Search Results

Now showing 1 - 10 of 54
  • Item
    Parameterized Skin for Rendering Flushing Due to Exertion
    (The Eurographics Association, 2016) Vieira, Teresa; Angus Forbes and Lyn Bartram
    It is known that physical exercise increases blood flow and flushing of the facial skin. When digital artists hand-paint the textures for animating realistic effects such as flushing due to exertion, they observe real-life references and use their creativity. This process is empirical and time-consuming, and artists often reuse the same textures across all facial expressions. The underlying problem is a lack of guidelines on how skin color changes with exertion, which is only overcome when scans of facial appearance are used. However, facial appearance scans are best suited to creating digital doubles and do not easily transfer to different characters. Here, we present a novel delta-parameterized method that guides artists in painting textures for animating flushing due to physical exertion. To design the proposed method, we analyzed skin color differences in L*a*b* color space from portraits of 34 human subjects taken before and after physical exercise. We explain the experimental setup, the statistical analysis, and the resulting delta color differences from which we derived our method's parameters. We illustrate how the method suits any skin type and character style. The proposed method was reviewed by texture artists, who found it useful and considered that it may help render more realistic flushed exertion expressions than current guesswork techniques.
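The delta-parameterized shift described in this abstract can be sketched as a simple offset in L*a*b* space scaled by an exertion amount. The default delta values below are illustrative placeholders, not the paper's measured data:

```python
def apply_flush(lab, exertion, delta=(-2.0, 6.0, 1.0)):
    """Shift a base skin colour (L*, a*, b*) toward its flushed
    appearance, scaled by an exertion parameter in [0, 1].
    The default deltas are hypothetical, for illustration only."""
    L, a, b = lab
    dL, da, db = delta
    return (L + exertion * dL, a + exertion * da, b + exertion * db)
```

An artist-facing tool would expose `exertion` as a slider and apply the shift per texel of the base texture.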
  • Item
    c-Space: Time-evolving 3D Models (4D) from Heterogeneous Distributed Video Sources
    (The Eurographics Association, 2016) Ritz, Martin; Knuth, Martin; Domajnko, Matevz; Posniak, Oliver; Santos, Pedro; Fellner, Dieter W.; Chiara Eva Catalano and Livio De Luca
    We introduce c-Space, an approach to automated 4D reconstruction of dynamic real-world scenes, represented as time-evolving 3D geometry streams and available to everyone. Our novel technique solves the problem of fusing all sources captured asynchronously from multiple heterogeneous mobile devices around a dynamic scene at a real-world location. To this end, all captured input is broken down into a massive unordered frame set, the frames are sorted along a common time axis, and the ordered frame set is finally discretized into a time sequence of frame subsets, each subject to photogrammetric 3D reconstruction. The result is a timeline of 3D models, each representing a snapshot of the scene's evolution in 3D at a specific point in time. Just as a movie is a concatenation of time-discrete frames representing the evolution of a scene in 2D, the 4D frames reconstructed by c-Space line up to form the captured, dynamically changing 3D geometry of an event over time, enabling the user to interact with it in the very same way as with a static 3D model. We perform image analysis to automatically maximize the quality of results in the presence of challenging, heterogeneous, and asynchronous input sources exhibiting a wide quality spectrum. In addition, we show how this technique can be integrated as a 4D reconstruction web service module available to mobile end-users.
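The sort-and-discretize step this abstract describes can be sketched as time-binning: frames from all devices are ordered on a common time axis and grouped into fixed windows, each window feeding one photogrammetric reconstruction. The fixed-width windowing below is an assumption for illustration:

```python
from collections import defaultdict

def bin_frames(frames, window):
    """Sort frames from heterogeneous devices along a common time
    axis and discretize them into time windows of `window` seconds.
    `frames` is a list of (timestamp_seconds, frame_id) pairs; each
    returned bin would feed one 3D reconstruction."""
    bins = defaultdict(list)
    for t, frame in sorted(frames):
        bins[int(t // window)].append(frame)
    return [bins[k] for k in sorted(bins)]
```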
  • Item
    The Material Definition Language
    (The Eurographics Association, 2015) Kettner, L.; Raab, M.; Seibert, D.; Jordan, J.; Keller, A.; Reinhard Klein and Holly Rushmeier
    We introduce the physically-based Material Definition Language (MDL). Based on the principle of strictly separating material definition and rendering algorithms, each MDL material is applicable across rendering paradigms ranging from real-time and interactive solutions to advanced light transport simulation.
  • Item
    MTV-Player: Interactive Spatio-Temporal Exploration of Compressed Large-Scale Time-Varying Rectilinear Scalar Volumes
    (The Eurographics Association, 2019) Díaz, Jose; Marton, Fabio; Gobbetti, Enrico; Agus, Marco and Corsini, Massimiliano and Pintus, Ruggero
    We present an approach for supporting fully interactive exploration of massive time-varying rectilinear scalar volumes on commodity platforms. We decompose each frame into a forest of bricked octrees. Each brick is further subdivided into smaller blocks, which are compactly approximated by quantized variable-length sparse linear combinations of prototype blocks stored in a data-dependent dictionary learned from the input sequence. This variable bit-rate compact representation, obtained through a tolerance-driven learning and approximation process, is stored in a GPU-friendly format that supports direct adaptive streaming to the GPU with spatial and temporal random access. An adaptive compression-domain renderer closely coordinates off-line data selection, streaming, decompression, and rendering. The resulting system provides total control over the spatial and temporal dimensions of the data, supporting the same exploration metaphor as traditional video players. Since we employ a highly compressed representation, the bandwidth provided by current commodity platforms proves sufficient to fully stream and render dynamic representations without relying on partial updates, thus avoiding any unwanted dynamic effects introduced by current incremental loading approaches. Moreover, our variable-rate encoding based on sparse representations provides high-quality approximations while offering real-time decoding and rendering performance. The quality and performance of our approach are demonstrated on massive time-varying datasets at the terascale, which are nonlinearly explored at interactive rates on a commodity graphics PC.
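The tolerance-driven, variable-length sparse approximation this abstract describes can be sketched with greedy matching pursuit over a learned dictionary; the simplified coder below (unit-norm prototypes, no quantization) is a stand-in for the paper's actual encoder:

```python
import numpy as np

def approximate_block(block, dictionary, tol, max_terms=8):
    """Greedily approximate a flattened volume block as a sparse
    linear combination of dictionary prototypes, adding terms until
    the residual norm drops below `tol` (hence a variable bit rate).
    `dictionary` holds unit-norm prototype blocks as rows."""
    residual = block.astype(float).copy()
    terms = []  # (prototype index, coefficient) pairs
    for _ in range(max_terms):
        if np.linalg.norm(residual) <= tol:
            break
        scores = dictionary @ residual          # correlations
        k = int(np.argmax(np.abs(scores)))      # best-matching prototype
        terms.append((k, scores[k]))
        residual -= scores[k] * dictionary[k]
    return terms, residual
```

Decoding is then a dot product of the stored coefficients with the dictionary rows, which is what makes GPU-side random access cheap.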
  • Item
    Supporting Urban Search & Rescue Mission Planning through Visualization-Based Analysis
    (The Eurographics Association, 2014) Bock, Alexander; Kleiner, Alexander; Lundberg, Jonas; Ropinski, Timo; Jan Bender and Arjan Kuijper and Tatiana von Landesberger and Holger Theisel and Philipp Urban
    We propose a visualization system for incident commanders in urban search & rescue scenarios that supports access path planning for post-disaster structures. Utilizing point cloud data acquired from unmanned robots, we provide methods for the assessment of automatically generated paths. As data uncertainty and a priori unknown information make fully automated systems impractical, we present a set of viable access paths, based on varying risk factors, in a 3D environment combined with visual analysis tools that enable informed decisions and trade-offs. Based on these decisions, a responder is guided along the path by the incident commander, who can interactively annotate and reevaluate the acquired point cloud to react to the dynamics of the situation. We describe design considerations for our system and its technical realization, and discuss the results of an expert evaluation.
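Generating alternative access paths under varying risk factors, as the abstract describes, can be sketched as a shortest-path search whose edge cost mixes traversal length with a tunable risk weight; the graph encoding and weighting below are illustrative assumptions, not the paper's model:

```python
import heapq

def safest_path(graph, start, goal, w_risk=1.0):
    """Dijkstra over an access graph whose edges carry (length, risk).
    Edge cost = length + w_risk * risk, so sweeping w_risk yields the
    set of alternative paths a commander can compare. `graph` maps
    node -> list of (neighbor, length, risk) tuples."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, length, risk in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(frontier,
                               (cost + length + w_risk * risk, nbr, path + [nbr]))
    return None
```

With `w_risk = 0` the search returns the shortest route; raising it trades length for safety.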
  • Item
    Watertight Scenes from Urban LiDAR and Planar Surfaces
    (The Eurographics Association and Blackwell Publishing Ltd., 2013) Kreveld, Marc van; Lankveld, Thijs van; Veltkamp, Remco C.; Yaron Lipman and Hao Zhang
    The demand for large geometric models is increasing, especially of urban environments. This has resulted in production of massive point cloud data from images or LiDAR. Visualization and further processing generally require a detailed, yet concise representation of the scene's surfaces. Related work generally either approximates the data with the risk of over-smoothing, or interpolates the data with excessive detail. Many surfaces in urban scenes can be modeled more concisely by planar approximations. We present a method that combines these polygons into a watertight model. The polygon-based shape is closed with free-form meshes based on visibility information. To achieve this, we divide 3-space into inside and outside volumes by combining a constrained Delaunay tetrahedralization with a graph-cut. We compare our method with related work on several large urban LiDAR data sets. We construct similar shapes with a third fewer triangles to model the scenes. Additionally, our results are more visually pleasing and closer to a human modeler's description of urban scenes using simple boxes.
  • Item
    Interactive Exploration of Gigantic Point Clouds on Mobile Devices
    (The Eurographics Association, 2012) Rodriguez, Marcos Balsa; Gobbetti, Enrico; Marton, Fabio; Pintus, Ruggero; Pintore, Giovanni; Tinti, Alex; David Arnold and Jaime Kaminski and Franco Niccolucci and Andre Stork
    New embedded CPUs that sport powerful graphics chipsets have the potential to make complex 3D applications feasible on mobile devices. In this paper, we present a scalable architecture and its implementation for mobile exploration of large point clouds, which are nowadays ubiquitous in the cultural heritage domain thanks to the increased performance and availability of 3D scanning techniques. The quality and performance of our approach are demonstrated on gigantic point clouds, interactively explored on Apple iPad and iPhone devices in a variety of network settings. Applications of the technology include on-site exploration during scanning campaigns and promotion of cultural heritage artifacts.
  • Item
    Improving the Dwivedi Sampling Scheme
    (The Eurographics Association and John Wiley & Sons Ltd., 2016) Meng, Johannes; Hanika, Johannes; Dachsbacher, Carsten; Elmar Eisemann and Eugene Fiume
    Despite recent advances in Monte Carlo rendering techniques, dense, high-albedo participating media such as wax or skin still remain a difficult problem. In such media, random walks tend to become very long, but may still contribute significantly to the image. The Dwivedi sampling scheme, which is based on zero-variance random walks, biases the sampling probability distributions to exit the medium as quickly as possible. This can reduce variance considerably under the assumption of a locally homogeneous medium with a constant phase function. Prior work uses the normal at the point of entry as the bias direction. We demonstrate that this technique can fail in common scenarios such as thin geometry with a strong backlight. We propose two new biasing strategies, Closest Point and Incident Illumination biasing, and show that these techniques can speed up convergence by up to an order of magnitude. Additionally, we propose a heuristic approach for combining biased and classical sampling techniques using multiple importance sampling.
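The combination of biased and classical sampling via multiple importance sampling that the abstract mentions can be illustrated with the standard one-sample balance heuristic; the paper's own combination heuristic may differ, so this is only the generic form:

```python
def balance_weight(pdf_biased, pdf_classical):
    """One-sample MIS balance-heuristic weight for a direction drawn
    from the biased (Dwivedi-style) strategy, given both strategies'
    pdfs evaluated at that direction. Weighting the two estimators
    this way keeps the combined estimator robust where the
    local-homogeneity assumption behind the bias breaks down."""
    return pdf_biased / (pdf_biased + pdf_classical)
```

The symmetric weight for a sample drawn from the classical strategy is `1 - balance_weight(...)` with the same arguments.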
  • Item
    Interactive Steering of Mesh Animations
    (The Eurographics Association, 2012) Vögele, Anna; Hermann, Max; Krüger, Björn; Klein, Reinhard; Jehee Lee and Paul Kry
    Creating geometrically detailed mesh animations is an involved and resource-intensive process in digital content creation. In this work we present a method to rapidly combine available sparse motion capture data with existing mesh sequences to produce a large variety of new animations. The key idea is to model shape changes correlated to the pose of the animated object via a part-based statistical shape model. We observe that compact linear models suffice for a segmentation into nearly rigid parts. The same segmentation further guides the parameterization of the pose, which is learned in conjunction with the marker movement. Besides the inherent high geometric detail, further benefits of the presented method arise from its robustness against errors in segmentation and pose parameterization. Due to the efficiency of both the learning and synthesis phases, our model makes it possible to interactively steer virtual avatars based on a few markers extracted from video data or input devices such as the Kinect sensor.
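The pose-to-shape correlation the abstract describes can be sketched as a learned linear map from pose parameters to per-part shape-model coefficients; the plain least-squares fit below is a simplified stand-in for the paper's statistical model:

```python
import numpy as np

def learn_pose_to_shape(poses, coeffs):
    """Least-squares map from pose parameters to shape-model
    coefficients, learned from corresponding training rows of
    `poses` and `coeffs`. At runtime, a sparse marker pose then
    drives detailed per-part mesh deformation."""
    P = np.hstack([poses, np.ones((len(poses), 1))])  # affine term
    W, *_ = np.linalg.lstsq(P, coeffs, rcond=None)
    return W

def predict_shape(W, pose):
    """Evaluate the learned map for one pose vector."""
    return np.append(pose, 1.0) @ W
```

Because both the fit and the evaluation are single matrix operations, this kind of model is cheap enough for the interactive steering the abstract targets.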
  • Item
    Measuring Realism in Hair Rendering
    (The Eurographics Association, 2013) Ramesh, Girish; Turner, Martin J.; Silvester Czanner and Wen Tang
    Visualisation of hair is an extremely complex problem within the field of Computer Graphics. Over the last 10 years, huge strides have been made in physically-based hair rendering, giving rise to many applications in fields beyond the graphics industry. Despite the number of models for hair rendering, there is no well-defined evaluation process to measure the realism of the hair models in use today. For this work-in-progress paper, we propose an evaluation process not only to evaluate the realism of hair rendering models, but also to examine the various effects that contribute to their realistic perception. This builds an index of realism based on experiments with computer-generated models, and then proposes comparing the results with values obtained from computed tomography, optical imaging and goniophotometer readings.