26-Issue 3
Browsing 26-Issue 3 by Issue Date
Now showing 1 - 20 of 53
Item: 3D Lip-Synch Generation with Data-Faithful Machine Learning (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Kim, Ig-Jae; Ko, Hyeong-Seok
This paper proposes a new technique for generating three-dimensional speech animation. The proposed technique takes advantage of both data-driven and machine learning approaches. It seeks to utilize the most relevant part of the captured utterances for the synthesis of input phoneme sequences. If highly relevant data are missing or lacking, it utilizes less relevant (but more abundant) data and relies more heavily on machine learning for the lip-synch generation. This hybrid approach produces results that are more faithful to real data than conventional machine learning approaches, while handling incompleteness or redundancy in the database better than conventional data-driven approaches. Experimental results, obtained by applying the proposed technique to the utterance of various words and phrases, show that (1) the proposed technique generates lip-synchs of different qualities depending on the availability of the data, and (2) the new technique produces more realistic results than conventional machine learning approaches.

Item: Layered Performance Animation with Correlation Maps (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Neff, Michael; Albrecht, Irene; Seidel, Hans-Peter
Performance has a spontaneity and aliveness that can be difficult to capture in more methodical animation processes such as keyframing. Access to performance animation has traditionally been limited to low degree of freedom characters or has required expensive hardware. We present a performance-based animation system for humanoid characters that requires no special hardware, relying only on mouse and keyboard input. We deal with the problem of controlling such a high degree of freedom model with low degree of freedom input through the use of correlation maps, which employ 2D mouse input to modify a set of expressively relevant character parameters. Control can be continuously varied by rapidly switching between these maps. We present flexible techniques for varying and combining these maps and a simple process for defining them. The tool is highly configurable, presenting suitable defaults for novices and supporting a high degree of customization and control for experts. Animation can be recorded in a single pass, or multiple layers can be used to increase detail. Results from a user study indicate that novices are able to produce reasonable animations within their first hour of using the system. We also show more complicated results for walking and for a standing character that gestures and dances.
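To illustrate the correlation-map idea, the sketch below is a minimal stand-in rather than the authors' system: a hypothetical map bilinearly blends four hand-authored corner poses of a few assumed arm parameters according to the 2D mouse position, and switching maps would simply swap the corner poses.

```python
import numpy as np

# Hypothetical expressive parameters (assumed names, not from the paper):
# a few arm joint angles in degrees for a gesturing character.
PARAM_NAMES = ["shoulder_swing", "shoulder_lift", "elbow_bend"]

class CorrelationMap:
    """Maps 2D mouse input (u, v) in [0, 1]^2 to a full parameter vector by
    bilinearly blending four corner poses (a simple stand-in for an
    expressively authored correlation map)."""

    def __init__(self, corner_poses):
        # corner_poses[i][j] = pose at corner (i, j); each pose is a vector.
        self.corners = np.asarray(corner_poses, dtype=float)  # shape (2, 2, P)

    def evaluate(self, u, v):
        # Bilinear interpolation over the four corner poses.
        top = (1 - u) * self.corners[0, 0] + u * self.corners[0, 1]
        bot = (1 - u) * self.corners[1, 0] + u * self.corners[1, 1]
        return (1 - v) * top + v * bot

# Example map: mouse x correlates with arm swing, mouse y with lift and bend.
arm_map = CorrelationMap([
    [[-40.0, 10.0, 20.0], [40.0, 10.0, 20.0]],
    [[-40.0, 80.0, 70.0], [40.0, 80.0, 70.0]],
])

mouse_u, mouse_v = 0.25, 0.6            # normalized mouse position
pose = arm_map.evaluate(mouse_u, mouse_v)
print(dict(zip(PARAM_NAMES, pose.round(1))))
```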
Item: Interactive Simulation of the Human Eye Depth of Field and Its Correction by Spectacle Lenses (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Kakimoto, Masanori; Tatsukawa, Tomoaki; Mukai, Yukiteru; Nishita, Tomoyuki
This paper describes a fast rendering algorithm for verification of spectacle lens design. Our method simulates refraction corrections of astigmatism as well as myopia or presbyopia. Refraction and defocus are the main issues in the simulation. For refraction, our proposed method uses per-vertex basis ray tracing, which warps the environment map and produces a real-time refracted image that is subjectively as good as ray tracing. Conventional defocus simulation was previously done by distribution ray tracing, and a real-time solution was impossible. We introduce the concept of a blur field, which we use to displace every vertex according to its position. The blurring information is precomputed as a set of field values distributed to voxels formed by evenly subdividing the perspective projected space. The field values can be determined by tracing a wavefront from each voxel through the lens and the eye, and by evaluating the spread of light at the retina considering the best human accommodation effort. The blur field is stored as texture data and referred to by the vertex shader that displaces each vertex. At an interactive frame rate, blending the multiple rendering results produces a blurred image comparable to distribution ray tracing output.

Item: Real-time homogenous translucent material editing (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Xu, Kun; Gao, Yue; Li, Yong; Ju, Tao; Hu, Shi-Min
This paper presents a novel method for real-time homogenous translucent material editing under fixed illumination. We consider the complete analytic BSSRDF model proposed by Jensen et al. [JMLH01], including both multiple scattering and single scattering. Our method allows the user to adjust the analytic parameters of the BSSRDF and provides high-quality, real-time rendering feedback. Inspired by recently developed Precomputed Radiance Transfer (PRT) techniques, we approximate both the multiple scattering diffuse reflectance function and the single scattering exponential attenuation function in the analytic model using basis functions, so that re-computing the outgoing radiance at each vertex as parameters change reduces to simple dot products. In addition, using a non-uniform piecewise polynomial basis, we are able to achieve smaller approximation error than with bases adopted in previous PRT-based works, such as spherical harmonics and wavelets. Using hardware acceleration, we demonstrate that our system generates images comparable to [JMLH01] at real-time frame rates.
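To make the "reduces to simple dot products" step concrete, here is a minimal PRT-style sketch under heavy assumptions: a tiny piecewise-constant basis stands in for the paper's non-uniform piecewise polynomial basis, the per-vertex transfer vectors are random placeholders for the real precomputed light transport, and the edited reflectance curve is a made-up exponential falloff.

```python
import numpy as np

K = 16                              # number of basis functions (assumed)
d = np.linspace(0.0, 1.0, 256)      # distance samples used for projection

def basis(k, x):
    # Simple piecewise-constant basis over [0, 1]; only a stand-in for the
    # paper's non-uniform piecewise polynomial basis.
    lo, hi = k / K, (k + 1) / K
    return ((x >= lo) & (x < hi)).astype(float)

B = np.stack([basis(k, d) for k in range(K)])            # (K, 256)

def project(material_curve):
    # Least-squares projection of an edited reflectance curve onto the basis.
    return np.linalg.lstsq(B.T, material_curve(d), rcond=None)[0]

# Precomputation (offline): per-vertex transfer vectors that fold the fixed
# lighting and geometry into the same basis.  Random placeholders here.
rng = np.random.default_rng(0)
num_vertices = 10_000
transfer = rng.random((num_vertices, K))

# Interactive edit: new scattering parameters give a new reflectance-vs-distance
# curve (an invented exponential falloff for this sketch).
def edited_reflectance(x, sigma_tr=4.0, albedo=0.8):
    return albedo * np.exp(-sigma_tr * x)

c = project(edited_reflectance)     # (K,) coefficients of the edited material
outgoing = transfer @ c             # one dot product per vertex
print(outgoing.shape, float(outgoing.mean()))
```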
Item: Precomputed Radiance Transfer Field for Rendering Interreflections in Dynamic Scenes (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Pan, Minghao; Wang, Rui; Liu, Xinguo; Peng, Qunsheng; Bao, Hujun
In this paper, we introduce a new representation - radiance transfer fields (RTF) - for rendering interreflections in dynamic scenes under low frequency illumination. The RTF describes the radiance transferred by an individual object to its surrounding space as a function of the incident radiance. An important property of the RTF is its independence from the scene configuration, enabling interreflection computation in dynamic scenes. Moreover, RTFs naturally fit in with the rendering framework of precomputed shadow fields, incurring negligible cost to add interreflection effects, and they can be used to compute interreflections for both diffuse and glossy objects. We also show that RTF data can be highly compressed by clustered principal component analysis (CPCA), which not only reduces the memory cost but also accelerates rendering. Finally, we present experimental results demonstrating our techniques.

Item: Omni-directional Relief Impostors (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Andujar, C.; Boo, J.; Brunet, P.; Fairen, M.; Navazo, I.; Vazquez, P.; Vinacua, A.
Relief impostors have been proposed as a compact and high-quality representation for high-frequency detail in 3D models. In this paper we propose an algorithm to represent a complex object through the combination of a reduced set of relief maps. These relief maps can be rendered with very few artifacts and no apparent deformation from any view direction. We present an efficient algorithm to optimize the set of viewing planes supporting the relief maps, and an image-space metric to select a sufficient subset of relief maps for each view direction. Selected maps (typically three) are rendered with the well-known ray-height-field intersection algorithm implemented on the GPU. We discuss several strategies to merge overlapping relief maps while minimizing sampling artifacts and to reduce extra texture requirements. We show that our representation can maintain the geometry and the silhouette of a large class of complex shapes with no restriction on the viewing direction. Since the rendering cost is output sensitive, our representation can be used to build a hierarchical model of a 3D scene.

Item: Crowds by Example (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Lerner, Alon; Chrysanthou, Yiorgos; Lischinski, Dani
We present an example-based crowd simulation technique. Most crowd simulation techniques assume that the behavior exhibited by each person in the crowd can be defined by a restricted set of rules. This assumption limits the behavioral complexity of the simulated agents. By learning from real-world examples, our autonomous agents display complex natural behaviors that are often missing in crowd simulations. Examples are created from tracked video segments of real pedestrian crowds. During a simulation, autonomous agents search for examples that closely match the situation that they are facing. Trajectories taken by real people in similar situations are copied to the simulated agents, resulting in seemingly natural behaviors.
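The example-matching step can be illustrated with a toy sketch; the situation descriptor (relative positions of the nearest neighbours only) and the database layout are assumptions made for this illustration and are much cruder than the paper's encoding of tracked video segments.

```python
import numpy as np

def situation_descriptor(agent_pos, agent_vel, others, k=3):
    """A toy query descriptor: the k nearest neighbours' positions relative
    to the agent (velocities are ignored here for brevity)."""
    rel = others - agent_pos
    order = np.argsort(np.linalg.norm(rel, axis=1))[:k]
    return rel[order].ravel()

def best_example(query, example_descriptors, example_trajectories):
    """Return the stored trajectory whose recorded situation is closest to
    the query (plain Euclidean distance for simplicity)."""
    dists = np.linalg.norm(example_descriptors - query, axis=1)
    return example_trajectories[int(np.argmin(dists))]

# A tiny fake example database: each entry pairs a situation descriptor with
# the short trajectory the tracked pedestrian actually followed afterwards.
rng = np.random.default_rng(1)
descriptors = rng.normal(size=(100, 6))
trajectories = rng.normal(scale=0.1, size=(100, 10, 2)).cumsum(axis=1)

agent_pos = np.array([0.0, 0.0])
agent_vel = np.array([0.0, 1.0])
others = rng.normal(size=(8, 2))

q = situation_descriptor(agent_pos, agent_vel, others)
next_steps = best_example(q, descriptors, trajectories)
print(next_steps[0], next_steps[-1])   # first and last offsets of the copied path
```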
Item: Shape-aware Volume Illustration (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Chen, Wei; Lu, Aidong; Ebert, David S.
We introduce a novel volume illustration technique for regularly sampled volume datasets. The fundamental difference between previous volume illustration algorithms and ours is that our results are shape-aware: they depend not only on the rendering styles but also on the shape styles. We propose a new data structure that is derived from the input volume and consists of a distance volume and a segmentation volume. The distance volume is used to reconstruct a continuous field around the object boundary, facilitating smooth illustrations of boundaries and silhouettes. The segmentation volume allows us to abstract or remove distracting details and noise, and to apply different rendering styles to different objects and components. We also demonstrate how to modify the shape of illustrated objects using a new 2D curve analogy technique. This provides an interactive method for learning shape variations from 2D hand-painted illustrations by drawing several lines. Our experiments on several volume datasets demonstrate that the proposed approach can achieve visually appealing and shape-aware illustrations. The feedback from medical illustrators is quite encouraging.

Item: Geodesic-Controlled Developable Surfaces for Modeling Paper Bending (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Bo, Pengbo; Wang, Wenping
We present a novel and effective method for modeling a developable surface to simulate paper bending in interactive and animation applications. The method exploits the representation of a developable surface as the envelope of rectifying planes of a curve in 3D, which is therefore necessarily a geodesic on the surface. We manipulate the geodesic to provide intuitive shape control for modeling paper bending. Our method ensures a natural continuous isometric deformation from a piece of bent paper to its flat state without any stretching. Test examples show that the new scheme is fast, accurate, and easy to use, thus providing an effective approach to interactive paper bending. We also show how to handle non-convex piecewise smooth developable surfaces.
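For readers who want the underlying construction, the classical differential-geometry fact the rectifying-plane representation relies on can be stated briefly; it is given here for context and is not taken from the paper itself.

```latex
% Rectifying developable of a space curve c(s) (arc-length parameter s),
% i.e. the envelope of its rectifying planes spanned by T(s) and B(s):
\[
  X(s,u) \;=\; c(s) \;+\; u\,\bigl(\tau(s)\,\mathbf{T}(s) + \kappa(s)\,\mathbf{B}(s)\bigr),
\]
% where (T, N, B) is the Frenet frame and kappa, tau are curvature and torsion.
% The surface X is developable, and c lies on it as a geodesic, which is why
% editing the geodesic gives direct control over the bent sheet while the
% isometry to its flat state is preserved.
```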
Item: Boundary Constrained Swept Surfaces for Modelling and Animation (The Eurographics Association and Blackwell Publishing Ltd, 2007)
You, L. H.; Yang, X. S.; Pachulski, M.; Zhang, Jian J.
Due to their simplicity and intuitiveness, swept surfaces are widely used in many surface modelling applications. In this paper, we present a versatile swept surface technique called boundary constrained swept surfaces. Its most distinctive feature is the ability to satisfy boundary constraints, including the shape and tangent conditions at the boundaries of a swept surface. This permits significantly varying surfaces to be both modelled and smoothly assembled, leading to the construction of complex objects. The representation, similar to an ordinary swept surface, is analytical in nature and thus light in storage cost and numerically very stable to compute. We also introduce a number of useful shape manipulation tools, such as sculpting forces, to deform a surface both locally and globally. In addition to being a complementary method to the mainstream surface modelling and deformation techniques, we have found it very effective in automatically rebuilding existing complex models. Model reconstruction is arguably one of the most laborious and expensive tasks in modelling complex animated characters. We demonstrate how our technique can be used to automate this process.

Item: Soft Articulated Characters with Fast Contact Handling (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Galoppo, Nico; Otaduy, Miguel A.; Tekin, Serhat; Gross, Markus; Lin, Ming C.
Fast contact handling of soft articulated characters is a computationally challenging problem, in part due to the complex interplay between skeletal and surface deformation. We present a fast, novel algorithm based on a layered representation for articulated bodies that enables physically-plausible simulation of animated characters with a high-resolution deformable skin in real time. Our algorithm gracefully captures the dynamic skeleton-skin interplay through a novel formulation of elastic deformation in the pose space of the skinned surface. The algorithm also overcomes the computational challenges by robustly decoupling skeleton and skin computations using careful approximations of Schur complements, and by efficiently performing collision queries by exploiting the layered representation. With this approach, we can simultaneously handle large contact areas, produce rich surface deformations, and capture the collision response of a character's skeleton.

Item: Efficient Reflectance and Visibility Approximations for Environment Map Rendering (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Green, Paul; Kautz, Jan; Durand, Fredo
We present a technique for approximating isotropic BRDFs and precomputed self-occlusion that enables accurate and efficient prefiltered environment map rendering. Our approach uses a nonlinear approximation of the BRDF as a weighted sum of isotropic Gaussian functions. Our representation requires a minimal amount of storage, can accurately represent BRDFs of arbitrary sharpness, and is, above all, efficient to render. We precompute visibility due to self-occlusion and store a low-frequency approximation suitable for glossy reflections. We demonstrate our method by fitting our representation to measured BRDF data, yielding high visual quality at real-time frame rates.
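As a rough illustration of the "weighted sum of isotropic Gaussian functions" mentioned in the abstract, one common way to write such a lobe decomposition uses spherical Gaussians; the parameterization below is a generic form given for exposition and is not claimed to match the paper's fitted representation.

```latex
% Generic spherical-Gaussian decomposition of an isotropic BRDF slice for a
% fixed outgoing direction omega_o (amplitudes alpha_k, sharpnesses lambda_k,
% and lobe axes mu_k all depend on omega_o):
\[
  f_r(\omega_i,\omega_o) \;\approx\;
  \sum_{k=1}^{K} \alpha_k(\omega_o)\,
  \exp\!\bigl(\lambda_k(\omega_o)\,(\omega_i\cdot\mu_k(\omega_o) - 1)\bigr)
\]
% Each lobe can then be evaluated against an environment map prefiltered with a
% Gaussian of the matching width, so shading costs roughly K filtered lookups.
```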
Item: What can Computer Graphics expect from 3D Computer Vision? (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Sara, Radim
Computer Vision is a discipline whose ultimate goal is to interpret optical images of real scenes. It is well understood that this problem is cursed by ambiguity of interpretation and uncertainty of evidence. Despite the imperfection of results, due to scenes never following our prior models exactly, Computer Vision has achieved significant progress in the past two decades. This talk will outline the quest of 3D Computer Vision by describing a processing pipeline that receives a heap of unorganized images from unknown cameras and produces a consistent 3D geometric model together with camera calibrations. We will see how new algorithms allow the standard conception of the pipeline as a series of independent processing steps to gradually transform into a single complex, yet efficient, vision task. We will identify some points where linking Computer Vision and Computer Graphics would bring significant progress.

Item: On-the-fly Curve-skeleton Computation for 3D Shapes (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Sharf, Andrei; Lewiner, Thomas; Shamir, Ariel; Kobbelt, Leif
The curve-skeleton of a 3D object is an abstract geometrical and topological representation of its 3D shape. It maps the spatial relation of geometrically meaningful parts to a graph structure. Each arc of this graph represents a part of the object with roughly constant diameter or thickness, and approximates its centerline. This makes the curve-skeleton suitable for describing and handling articulated objects such as characters for animation. We present an algorithm to extract such a skeleton on-the-fly, both from point clouds and polygonal meshes. The algorithm is based on a deformable model evolution that captures the object's volumetric shape. The deformable model involves multiple competing fronts which evolve inside the object in a coarse-to-fine manner. We first track these fronts' centers, and then merge and filter the resulting arcs to obtain a curve-skeleton of the object. The process inherits the robustness of the reconstruction technique, being able to cope with noisy input, intricate geometry and complex topology. It creates a natural segmentation of the object and computes a center curve for each segment while maintaining a full correspondence between the skeleton and the boundary of the object.

Item: Image Dequantization: Restoration of Quantized Colors (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Kim, Tae-hoon; Ahn, Jongwoo; Choi, Min Gyu
Color quantization replaces the color of each pixel with the closest representative color and thus partitions the resulting image into uniformly-colored regions. As a consequence, continuous, detailed variations of color over the corresponding regions in the original image are lost through color quantization. In this paper, we present a novel blind scheme for restoring such variations from a color-quantized input image without a priori knowledge of the quantization method. Our scheme identifies which pairs of uniformly-colored regions in the input image should have continuous variations of color in the resulting image. Then, such regions are seamlessly stitched through optimization while preserving the closest representative colors. The user can optionally indicate which regions should be separated or stitched by scribbling constraint brushes across the regions. We demonstrate the effectiveness of our approach through diverse examples, such as photographs, cartoons, and artistic illustrations.

Item: Context-Aware Skeletal Shape Deformation (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Weber, Ofir; Sorkine, Olga; Lipman, Yaron; Gotsman, Craig
We describe a system for the animation of a skeleton-controlled articulated object that preserves the fine geometric details of the object skin and conforms to the characteristic shapes of the object specified through a set of examples. The system provides the animator with an intuitive user interface and produces compelling results even when presented with a very small set of examples. In addition, it is able to generalize well by extrapolating far beyond the examples.

Item: Online Motion Capture Marker Labeling for Multiple Interacting Articulated Targets (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Yu, Qian; Li, Qing; Deng, Zhigang
In this paper, we propose an online motion capture marker labeling approach for multiple interacting articulated targets. Given hundreds of unlabeled motion capture markers from multiple articulated targets that are interacting with each other, our approach automatically labels these markers frame by frame, by fitting rigid bodies and exploiting trained structure and motion models. Advantages of our approach include: (1) our method is an online algorithm that requires no user interaction once it starts; (2) our method is more robust than traditional closest-point-based approaches because it automatically imposes the structure and motion models; (3) owing to the structure model, which encodes the rigidity of each articulated body of the captured targets, our method can recover missing markers robustly. Our approach is efficient and particularly suited for online computer animation and video game applications.
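A drastically simplified sketch of one labeling step is given below: it greedily matches this frame's unlabeled markers to the previous frame's labeled positions by proximity, leaving unmatched labels as missing. The greedy matching, the distance threshold, and the omission of the paper's trained structure and motion models are all simplifications for illustration.

```python
import numpy as np

def label_frame(prev_labeled, unlabeled, max_jump=0.05):
    """Greedy nearest-neighbour labeling of one frame.
    prev_labeled: dict label -> 3D position from the previous frame.
    unlabeled:    (M, 3) array of this frame's marker positions.
    Returns dict label -> row index into `unlabeled`, or None if missing."""
    labels = list(prev_labeled)
    prev = np.array([prev_labeled[l] for l in labels])               # (N, 3)
    cost = np.linalg.norm(prev[:, None, :] - unlabeled[None, :, :], axis=2)

    assignment, taken = {}, set()
    # Assign the globally cheapest (label, marker) pairs first.
    for i, j in sorted(np.ndindex(*cost.shape), key=lambda ij: cost[ij]):
        lbl = labels[i]
        if lbl in assignment or j in taken or cost[i, j] > max_jump:
            continue
        assignment[lbl] = j
        taken.add(j)
    for lbl in labels:
        assignment.setdefault(lbl, None)   # markers that dropped out this frame
    return assignment

# Tiny usage example with three labels and one marker missing this frame.
prev = {"LWRIST": np.array([0.0, 1.0, 0.0]),
        "RWRIST": np.array([0.5, 1.0, 0.0]),
        "HEAD":   np.array([0.25, 1.7, 0.0])}
current = np.array([[0.51, 1.01, 0.0], [0.01, 0.99, 0.0]])
print(label_frame(prev, current))
```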
Item: Style Transfer Functions for Illustrative Volume Rendering (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Bruckner, S.; Groeller, M. E.
Illustrative volume visualization frequently employs non-photorealistic rendering techniques to enhance important features or to suppress unwanted details. However, it is difficult to integrate multiple non-photorealistic rendering approaches into a single framework due to great differences in the individual methods and their parameters. In this paper, we present the concept of style transfer functions. Our approach enables flexible data-driven illumination which goes beyond using the transfer function to just assign colors and opacities. An image-based lighting model uses sphere maps to represent non-photorealistic rendering styles. Style transfer functions allow us to combine a multitude of different shading styles in a single rendering. We extend this concept with a technique for curvature-controlled style contours and an illustrative transparency model. Our implementation of the presented methods allows interactive generation of high-quality volumetric illustrations.

Item: Skeleton-based Variational Mesh Deformations (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Yoshizawa, Shin; Belyaev, Alexander; Seidel, Hans-Peter
In this paper, a new free-form shape deformation approach is proposed. We combine a skeleton-based mesh deformation technique with discrete differential coordinates in order to create natural-looking global shape deformations. Given a triangle mesh, we first extract a skeletal mesh, a two-sided Voronoi-based approximation of the medial axis. Next, the skeletal mesh is modified by free-form deformations. Then a desired global shape deformation is obtained by reconstructing the shape corresponding to the deformed skeletal mesh. The reconstruction is based on discrete differential coordinates. Our method preserves fine geometric details and the original shape thickness thanks to the use of discrete differential coordinates and skeleton-based deformations. We also develop a new mesh evolution technique which allows us to eliminate possible global and local self-intersections of the deformed mesh while preserving fine geometric details. Finally, we present a multi-resolution version of our approach in order to simplify and accelerate the deformation process. In addition, interesting links between the proposed free-form shape deformation technique and classical and modern results in the differential geometry of sphere congruences are established and discussed.

Item: Ray-Casted BlockMaps for Large Urban Models Visualization (The Eurographics Association and Blackwell Publishing Ltd, 2007)
Cignoni, P.; Di Benedetto, M.; Ganovelli, F.; Gobbetti, E.; Marton, F.; Scopigno, R.
We introduce a GPU-friendly technique that efficiently exploits the highly structured nature of urban environments to ensure rendering quality and interactive performance of city exploration tasks. Central to our approach is a novel discrete representation, called BlockMap, for the efficient encoding and rendering of a small set of textured buildings far from the viewer. A BlockMap compactly represents a set of textured vertical prisms with a bounded on-screen footprint. BlockMaps are stored in small fixed-size texture chunks and efficiently rendered through GPU ray casting. BlockMaps can be seamlessly integrated into hierarchical data structures for interactive rendering of large textured urban models. We illustrate an efficient output-sensitive framework in which a visibility-aware traversal of the hierarchy renders components close to the viewer with textured polygons and employs BlockMaps for far-away geometry. Our approach provides a bounded-size far-distance representation of cities, naturally scales with improving shader technology, and outperforms current state-of-the-art approaches. Its efficiency and generality are demonstrated with the interactive exploration of a large textured model of the city of Paris on a commodity graphics platform.
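As a loose, CPU-side illustration of the kind of ray casting a BlockMap enables, the sketch below marches a ray through a toy grid of vertical prisms stored as per-cell roof heights; the data layout, fixed step size, and lack of texturing are invented for this example and do not reflect the paper's texture encoding or GPU shader.

```python
import numpy as np

# A toy height grid: each cell holds the roof height of a vertical prism
# (0 means empty ground).  Invented data, not the paper's BlockMap layout.
heights = np.array([
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 2.0, 3.0, 0.0],
    [0.0, 1.0, 2.5, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])
CELL = 1.0   # world-space size of one cell

def cast_ray(origin, direction, t_max=20.0, step=0.02):
    """Fixed-step ray march: return the first point inside a prism, or None.
    A real GPU implementation would step per cell (DDA) instead."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    o = np.asarray(origin, float)
    for t in np.arange(0.0, t_max, step):
        p = o + t * d
        i, j = int(p[0] // CELL), int(p[1] // CELL)
        if not (0 <= i < heights.shape[0] and 0 <= j < heights.shape[1]):
            continue
        if p[2] <= heights[i, j]:        # below the roof height: inside a prism
            return p
    return None

# A ray looking down into the block of buildings.
hit = cast_ray(origin=[0.2, 0.2, 4.0], direction=[1.0, 1.0, -1.0])
print(hit)
```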