Search Results

Now showing 1 - 10 of 477
  • Item
    Parameterized Skin for Rendering Flushing Due to Exertion
    (The Eurographics Association, 2016) Vieira, Teresa; Angus Forbes and Lyn Bartram
    It is known that physical exercise increases blood flow and flushing of the facial skin. When digital artists hand-paint the textures used to animate realistic effects such as flushing due to exertion, they observe real-life references and use their creativity. This process is empirical and time-consuming, and artists often reuse the same textures across all facial expressions. The underlying problem is a lack of guidelines on how skin color changes with exertion, a gap that is bridged only when scans of facial appearance are available. However, facial appearance scans are best suited to creating digital doubles and do not transfer easily to different characters. Here, we present a novel delta-parameterized method that guides artists in painting textures for the animation of flushing due to physical exertion. To design the proposed method, we analyzed skin color differences in the L*a*b* color space from portraits of 34 human subjects taken before and after physical exercise. We explain the experimental setup, the statistical analysis, and the resulting delta color differences from which we derived our method's parameters. We illustrate how our method suits any skin type and character style. The proposed method was reviewed by texture artists, who found it useful and considered that it may help render more realistic flushed exertion expressions compared to state-of-the-art guesswork techniques.
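The ΔL*a*b* analysis described above can be reproduced with a standard sRGB-to-CIELAB conversion followed by a per-channel mean difference between before and after samples. The sketch below is a generic illustration assuming a D65 white point, not the authors' exact pipeline; the sample arrays in the usage note are hypothetical.

```python
import numpy as np

# sRGB (D65) -> XYZ matrix and the D65 reference white
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = np.array([0.95047, 1.0, 1.08883])

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] (shape (..., 3)) to CIE L*a*b*."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma, then convert to XYZ relative to the white point
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = lin @ M.T / WHITE
    d = 6.0 / 29.0
    f = np.where(xyz > d**3, np.cbrt(xyz), xyz / (3 * d**2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def mean_delta_lab(before_rgb, after_rgb):
    """Per-channel mean (dL, da, db) between two sets of skin samples."""
    return (srgb_to_lab(after_rgb) - srgb_to_lab(before_rgb)).mean(axis=0)
```

For example, `mean_delta_lab(before_patches, after_patches)` on two (N, 3) arrays of sampled skin pixels yields the delta triplet from which parameters like those in the paper could be derived.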
  • Item
    Interactive Projective Texturing for Non-Photorealistic Shading of Technical 3D Models
    (The Eurographics Association, 2013) Lux, Roland; Trapp, Matthias; Semmo, Amir; Döllner, Jürgen; Silvester Czanner and Wen Tang
    This paper presents a novel interactive rendering technique for creating and editing shadings for man-made objects in technical 3D visualizations. In contrast to shading approaches that use intensities computed from surface normals (e.g., Phong, Gooch, or toon shading), the presented approach uses one-dimensional gradient textures, which can be parametrized and interactively manipulated based on per-object bounding volume approximations. The fully hardware-accelerated rendering technique is based on projective texture mapping and customizable intensity transfer functions. A performance evaluation shows results comparable to traditional normal-based shading approaches. The work also introduces simple direct-manipulation metaphors that enable interactive user control of the gradient texture alignment and intensity transfer functions.
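The core idea, replacing normal-based intensity with a lookup into a 1D gradient parameterized over an object's bounding volume, can be illustrated in a few lines. The sketch below uses a simple axis-aligned bounding box and a hypothetical intensity transfer function; the paper's projective texturing and GPU specifics are omitted.

```python
import numpy as np

def gradient_intensity(p, bbox_min, bbox_max, axis, transfer):
    """Shade point p by projecting it onto one bounding-box axis.

    The normalized coordinate t in [0, 1] plays the role of the 1D
    gradient texture coordinate; `transfer` remaps it to an intensity
    (a customizable intensity transfer function).
    """
    t = (p[axis] - bbox_min[axis]) / (bbox_max[axis] - bbox_min[axis])
    return transfer(float(np.clip(t, 0.0, 1.0)))

# Hypothetical transfer function: darken toward the bottom of the object
linear_ramp = lambda t: 0.2 + 0.8 * t
```

A point halfway up the box's vertical extent would then receive the mid-ramp intensity, independent of its surface normal.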
  • Item
    Extracting Microfacet-based BRDF Parameters from Arbitrary Materials with Power Iterations
    (The Eurographics Association and John Wiley & Sons Ltd., 2015) Dupuy, Jonathan; Heitz, Eric; Iehl, Jean-Claude; Poulin, Pierre; Ostromoukhov, Victor; Jaakko Lehtinen and Derek Nowrouzezahrai
    We introduce a novel fitting procedure that takes as input an arbitrary material, possibly anisotropic, and automatically converts it to a microfacet BRDF. Our algorithm is based on the property that the distribution of microfacets may be retrieved by solving an eigenvector problem that is built solely from backscattering samples. We show that the eigenvector associated with the largest eigenvalue is always the only solution to this problem, and compute it using the power iteration method. This approach is straightforward to implement, much faster to compute, and considerably more robust than solutions based on nonlinear optimizations. In addition, we provide simple procedures for converting our fits into both Beckmann and GGX roughness parameters, and discuss the advantages of microfacet slope space for making our fits editable. We apply our method to measured materials from two large databases that include anisotropic materials, and demonstrate the benefits of spatially varying roughness on texture-mapped geometric models.
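The numerical core the abstract refers to, extracting the dominant eigenvector by power iteration, is easy to sketch. The matrix below is a generic symmetric stand-in, not the actual operator built from backscattering samples.

```python
import numpy as np

def power_iteration(A, iters=1000, tol=1e-12):
    """Dominant eigenvalue and eigenvector of a square matrix A."""
    v = np.full(A.shape[0], 1.0 / np.sqrt(A.shape[0]))  # unit start vector
    lam = 0.0
    for _ in range(iters):
        w = A @ v
        lam = np.linalg.norm(w)   # Rayleigh-style eigenvalue estimate
        w /= lam                  # re-normalize each step
        if np.linalg.norm(w - v) < tol:
            return lam, w         # converged
        v = w
    return lam, v
```

Repeated application of A amplifies the component along the dominant eigenvector, so the normalized iterate converges to it whenever the largest eigenvalue is well separated.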
  • Item
    c-Space: Time-evolving 3D Models (4D) from Heterogeneous Distributed Video Sources
    (The Eurographics Association, 2016) Ritz, Martin; Knuth, Martin; Domajnko, Matevz; Posniak, Oliver; Santos, Pedro; Fellner, Dieter W.; Chiara Eva Catalano and Livio De Luca
    We introduce c-Space, an approach to automated 4D reconstruction of dynamic real-world scenes, represented as time-evolving 3D geometry streams, available to everyone. Our novel technique solves the problem of fusing all sources captured asynchronously from multiple heterogeneous mobile devices around a dynamic scene at a real-world location. To this end, all captured input is broken down into a massive unordered frame set; the frames are sorted along a common time axis, and the ordered frame set is finally discretized into a time sequence of frame subsets, each subject to photogrammetric 3D reconstruction. The result is a timeline of 3D models, each representing a snapshot of the scene's evolution in 3D at a specific point in time. Just as a movie is a concatenation of time-discrete frames representing the evolution of a scene in 2D, the 4D frames reconstructed by c-Space line up to form the captured, dynamically changing 3D geometry of an event over time, enabling the user to interact with it in the very same way as with a static 3D model. We perform image analysis to automatically maximize the quality of the results in the presence of challenging, heterogeneous, and asynchronous input sources exhibiting a wide quality spectrum. In addition, we show how this technique can be integrated as a 4D reconstruction web service module available to mobile end-users.
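The sort-then-discretize step described above (order all frames on a common time axis, then split them into fixed-width windows, each handed to photogrammetric reconstruction) can be sketched as follows; the frame identifiers and window length are hypothetical.

```python
def discretize_frames(frames, window):
    """Group (timestamp, frame_id) pairs into consecutive time windows.

    Returns a list of frame-id subsets, one per non-empty window,
    ordered along the common time axis.
    """
    frames = sorted(frames, key=lambda f: f[0])   # common time axis
    t0 = frames[0][0]
    bins = {}
    for t, fid in frames:
        bins.setdefault(int((t - t0) // window), []).append(fid)
    return [bins[k] for k in sorted(bins)]
```

Each returned subset would then be reconstructed independently, yielding one 3D snapshot per window of the timeline.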
  • Item
    Real-time Inextensible Hair with Volume and Shape
    (The Eurographics Association, 2015) Sánchez-Banderas, Rosa María; Barreiro, Héctor; García-Fernández, Ignacio; Pérez, Mariano; Mateu Sbert and Jorge Lopez-Moreno
    Hair simulation is a common topic that has been studied extensively in computer graphics. One of the many challenges in this field is simulating realistic hair in a real-time environment. In this paper, we propose a unified simulation scheme that considers three of the key features in hair simulation: inextensibility, shape preservation, and hair-hair interaction. We use an extension of the Dynamic Follow-The-Leader (DFTL) method to include shape preservation. Our implementation is also coupled with a Lagrangian approach to address hair-hair interaction dynamics. We propose a GPU-friendly scheme that exploits the massive parallelism these devices offer and is able to simulate thousands of strands in real time. The method has been integrated into a game development platform with a shading model for rendering, and several test applications have been developed using this implementation.
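The inextensibility part of DFTL amounts to a follow-the-leader projection along each strand: every particle is pulled back to its rest distance from its predecessor, starting from the fixed root. A minimal sketch of that projection (the velocity correction and the shape-preservation extension of the full method are omitted):

```python
import numpy as np

def ftl_project(positions, rest_length):
    """Enforce inextensibility along one strand (root at index 0 is fixed)."""
    p = positions.copy()
    for i in range(1, len(p)):
        d = p[i] - p[i - 1]
        n = np.linalg.norm(d)
        if n > 1e-12:
            # Pull particle i back onto a sphere of radius rest_length
            # around its predecessor, preserving direction.
            p[i] = p[i - 1] + d / n * rest_length
    return p
```

Because each particle only reads its already-projected predecessor, the pass runs in a single sweep per strand, which is what makes a per-strand GPU parallelization natural.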
  • Item
    Stereo from Shading
    (The Eurographics Association, 2015) Chapiro, Alexandre; O'Sullivan, Carol; Jarosz, Wojciech; Gross, Markus; Smolic, Aljoscha; Jaakko Lehtinen and Derek Nowrouzezahrai
    We present a new method for creating and enhancing the stereoscopic 3D (S3D) sensation without using the parallax disparity between an image pair. S3D relies on a combination of cues to generate a feeling of depth, but only a few of these cues can easily be modified within a rendering pipeline without significantly changing the content. We explore one such cue, shading stereopsis, which to date has not been exploited for 3D rendering. By changing only the shading of objects between the left- and right-eye renders, we generate a noticeable increase in perceived depth. This effect can be used to create depth when applied to flat images, and to enhance depth when applied to shallow-depth S3D images. Our method modifies the shading normals of objects or materials, such that it can be flexibly and selectively applied in complex scenes with arbitrary numbers and types of lights and indirect illumination. Our results show examples of rendered stills and video, as well as live-action footage.
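One way to read "modifies the shading normals of objects" per eye is to rotate the shading normal by a small opposite angle about the vertical axis for each eye, while geometry and cameras stay fixed. The sketch below is a generic illustration of that idea, not the paper's exact normal-modification scheme; the per-eye angle is a hypothetical parameter.

```python
import numpy as np

def eye_shading_normal(n, angle, eye):
    """Rotate shading normal n about the y (vertical) axis.

    eye = -1 for the left eye, +1 for the right eye; `angle` is a
    small per-eye offset in radians (hypothetical parameter).
    """
    t = eye * angle
    c, s = np.cos(t), np.sin(t)
    R = np.array([[c,   0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s,  0.0, c]])
    return R @ np.asarray(n, dtype=float)
```

Feeding the two rotated normals into an unmodified lighting computation produces slightly different per-eye shading, which is the class of binocular shading difference the abstract exploits.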
  • Item
    Virtual Spherical Gaussian Lights for Real-time Glossy Indirect Illumination
    (The Eurographics Association and John Wiley & Sons Ltd., 2015) Tokuyoshi, Yusuke; Stam, Jos and Mitra, Niloy J. and Xu, Kun
    Virtual point lights (VPLs) are well established for real-time global illumination. However, this method suffers from spiky artifacts and flickering caused by singularities of VPLs, highly glossy materials, high-frequency textures, and discontinuous geometries. To avoid these artifacts, this paper introduces the virtual spherical Gaussian light (VSGL), which roughly represents a set of VPLs. For a VSGL, the total radiant intensity and the positional distribution of the VPLs are approximated using spherical Gaussians and a Gaussian distribution, respectively. Since this approximation can be computed using summations of VPL parameters, VSGLs can be dynamically generated from mipmapped reflective shadow maps. Our VSGL generation is simple and independent of scene geometry. In addition, the reflected radiance for a VSGL is calculated using an analytic formula. Hence, we are able to render one-bounce glossy interreflections at real-time frame rates with reduced artifacts.
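A spherical Gaussian lobe, the building block behind VSGLs, has the form G(v) = a·exp(λ(v·p − 1)) for axis p, sharpness λ, and amplitude a, and its integral over the sphere has a simple closed form. A minimal sketch of these two pieces (the VSGL construction from reflective shadow maps is not reproduced here):

```python
import numpy as np

def sg_eval(v, axis, sharpness, amplitude):
    """Evaluate a spherical Gaussian lobe at unit direction v."""
    return amplitude * np.exp(sharpness * (np.dot(v, axis) - 1.0))

def sg_integral(sharpness, amplitude):
    """Closed-form integral of the lobe over the whole sphere:
    2*pi*a*(1 - exp(-2*lambda)) / lambda."""
    return 2.0 * np.pi * amplitude * (1.0 - np.exp(-2.0 * sharpness)) / sharpness
```

The closed-form integral is what makes analytic reflected-radiance formulas for Gaussian lights tractable, since products of spherical Gaussians are again spherical Gaussians.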
  • Item
    Example-based Interpolation and Synthesis of Bidirectional Texture Functions
    (The Eurographics Association and Blackwell Publishing Ltd., 2013) Ruiters, Roland; Schwartz, Christopher; Klein, Reinhard; I. Navazo, P. Poulin
    Bidirectional Texture Functions (BTFs) have proven to be a well-suited representation for reproducing measured real-world surface appearance and provide a high degree of realism. We present an approach for designing novel materials by interpolating between several measured BTFs. For this purpose, we transfer concepts from existing texture interpolation methods to the much more complex case of material interpolation. We employ a separation of the BTF into a heightmap and a parallax-compensated BTF to cope with problems induced by parallax, masking, and shadowing within the material. By working only on the factorized representation of the parallax-compensated BTF and the heightmap, the material interpolation can be performed efficiently. With this novel method for mixing existing BTFs, we are able to design plausible and realistic intermediate materials for a large range of opaque material classes. Furthermore, it allows for the synthesis of tileable and seamless BTFs and even the generation of gradually changing materials that follow user-specified material distribution maps.
  • Item
    Interactive Low-Cost Wind Simulation For Cities
    (The Eurographics Association, 2016) Rando, Eduard; Muñoz, Imanol; Patow, Gustavo; Vincent Tourre and Filip Biljecki
    Wind is a ubiquitous phenomenon on Earth, and its behavior is well studied in many fields. However, studying it within an urban landscape remains an elusive target for large areas, given the high complexity of the interactions between wind and buildings. In this paper, we propose a lightweight 2D wind simulation for cities that is efficient enough to run at interactive frame rates, yet accurate enough to provide some prediction capability. The proposed algorithm is based on the Lattice-Boltzmann Method (LBM), which consists of a regular lattice that represents the fluid at discrete locations and a set of equations that simulate its flow. We perform all LBM computations in CUDA on graphics processors to accelerate the calculations.
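The LBM ingredients the abstract mentions, a regular lattice of distribution values plus collision and streaming rules, can be sketched on a tiny D2Q9 grid. This is a generic BGK single-relaxation-time step with periodic boundaries, not the paper's CUDA implementation; the relaxation time tau is a hypothetical parameter.

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their weights
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order equilibrium distribution per lattice direction."""
    cu = np.einsum('qd,xyd->qxy', C, u)           # c_i . u per cell
    usq = np.einsum('xyd,xyd->xy', u, u)
    return W[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.8):
    """One collide-and-stream step on a periodic grid."""
    rho = f.sum(axis=0)                                   # macroscopic density
    u = np.einsum('qd,qxy->xyd', C, f) / rho[..., None]   # macroscopic velocity
    f = f + (equilibrium(rho, u) - f) / tau               # BGK collision
    for q, (cx, cy) in enumerate(C):                      # streaming
        f[q] = np.roll(np.roll(f[q], cx, axis=0), cy, axis=1)
    return f
```

A real city simulation would add solid-wall (bounce-back) boundaries for buildings and an inflow condition; on the GPU, each cell's collide-and-stream maps naturally to one CUDA thread.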
  • Item
    Multi-Domain Real-time Planning in Dynamic Environments
    (ACM SIGGRAPH / Eurographics Association, 2013) Kapadia, Mubbasir; Beacco, Alejandro; Garcia, Francisco; Reddy, Vivek; Pelechano, Nuria; Badler, Norman I.; Theodore Kim and Robert Sumner
    This paper presents a real-time planning framework for multi-character navigation that enables the use of multiple heterogeneous problem domains of differing complexities for navigation in large, complex, dynamic virtual environments. The original navigation problem is decomposed into a set of smaller problems that are distributed across planning tasks working in these different domains. An anytime dynamic planner is used to efficiently compute and repair plans for each of these tasks, while using plans in one domain to focus and accelerate searches in more complex domains. We demonstrate the benefits of our framework by solving many challenging multi-agent scenarios in complex dynamic environments that require space-time precision and explicit coordination between interacting agents, accounting for dynamic information at all stages of the decision-making process.