EG2003
Browsing EG2003 by Issue Date
Now showing 1 - 20 of 113
Item Freeform Shape Representations for Efficient Geometry Processing (Eurographics Association, 2003)
Kobbelt, Leif
The most important concepts for the handling and storage of freeform shapes in geometry processing applications are parametric representations and volumetric representations. Both have their specific advantages and drawbacks. While the algebraic complexity of volumetric representations is independent of the shape complexity, the domain of a parametric representation usually has to have the same structure as the surface itself (which sometimes makes it necessary to update the domain when the surface is modified). On the other hand, the topology of a parametrically defined surface can be controlled explicitly, while in a volumetric representation the surface topology can change accidentally during deformation. A volumetric representation reduces distance queries or inside/outside tests to mere function evaluations, but the geodesic neighborhood relation between surface points is difficult to resolve. As a consequence, it seems promising to combine parametric and volumetric representations to effectively exploit both advantages. In this talk, a number of projects are presented and discussed in which such a combination leads to efficient and numerically stable algorithms for the solution of various geometry processing tasks. Applications include global error control for mesh decimation and smoothing, topology control for level-set surfaces, and shape modeling with unstructured point clouds.
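As a minimal illustration of the "mere function evaluation" mentioned in this abstract (not an example from the talk itself), a signed distance function makes inside/outside tests and distance queries trivial; the sphere used here is only a stand-in shape:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Signed distance to a sphere of radius r centred at c:
// negative inside, zero on the surface, positive outside.
double sphereSDF(const Vec3& p, const Vec3& c, double r) {
    double dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) - r;
}

// Inside/outside test and unsigned distance query reduce to
// evaluating the volumetric representation at the query point.
bool isInside(const Vec3& p, const Vec3& c, double r) {
    return sphereSDF(p, c, r) < 0.0;
}
```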
Item Open Issues in Photo-realistic Rendering (Eurographics Association, 2003)
Purgathofer, Werner
For more than two decades computer graphics researchers have tried to achieve photo-realism in their images as reliably as possible, mainly by simulating the physical laws of light and adding one effect after the other. Recent years have brought a shift of effort towards real-time methods, easy-to-use systems, integration with vision, modelling tools and the like. The quality of images is mostly accepted as sufficient for real-world applications, but where are we really? There are still numerous problems to be solved, and there is notable progress in these areas. No question, the plug-in philosophy of some commercial products has enabled several of these new techniques to be distributed quite quickly. But unfortunately, many other developments happen in isolated systems built for the pure purpose of publication, and never make it into commercial software. This presentation aims to make people more aware of such activities, and to evaluate the steps we still have to take towards perfect photo-realism. The talk will start with an attempt to give a brief overview of rendering history, highlighting the main research directions at different times. It will explain the driving forces of the developments, which are complexity, speed, and accuracy, and perhaps also expression in recent years. Solved and unsolved areas are examined, and compared to practically solved but theoretically incomplete topics such as translucency, tone mapping, light source and BTF descriptions, and error metrics for image quality evaluation. The distinction lies mainly in the difference between believable, correct, and predictive images. Also, for truly realistic images, modelling complexity is still an issue. Finally, some recent work on polarization and fluorescence is presented.

Item Virtual Modelling (Eurographics Association, 2003)
Kiss, Szilárd; Nijholt, Anton; Zwiers, Job
We concentrate our efforts on building virtual modelling environments where the content creator uses controls (widgets) as an interactive adjustment modality for the properties of the edited objects. Besides the advantage of being an on-line modelling approach (visualised just like any other on-line VRML content), our approach also provides instant visualisation and an intuitive, graphical editing modality. Although visual modelling environments are more powerful in content creation than a text editor, our aim is to include certain domain knowledge to extend their capabilities. Our most recent system is a 3D character modeller capable of handling H-anim or other types of hierarchies (based also on H-anim components) and the geometry attached to them. The modelling is based on the properties of regular grid geometries, on H-anim hierarchies and, where applicable, on 3D character symmetry. The novelty of our approach consists of 1) natural graphics components; 2) integrating the interface elements into the virtual environment itself; 3) rule-based modelling of 3D characters; 4) on-line modelling.

Item Photorealistic Augmented Reality (Eurographics Association, 2003)
Gibson, Simon; Chalmers, Alan
Augmenting real-world images with synthetic objects is becoming of increasing importance in both research and commercial applications, and encompasses aspects of fields such as mobile camera and display technology, computer graphics, image processing, computer vision and human perception. This tutorial presents an in-depth study of the techniques required to produce high-fidelity augmented images at interactive rates, and will consider how the realism of the resulting images can be assessed and their fidelity quantified. The first half of the tutorial covers the methods we use to generate augmented images. We will show how commonly available digital cameras can be used to record scene data, and how computer graphics hardware can be used to generate visually realistic augmented images at interactive rates. Specific topics covered will include geometric and radiometric camera calibration, image-based reconstruction of scene geometry and illumination, hardware-accelerated rendering of synthetic objects and shadows, and image compositing. The second half of the tutorial discusses in more detail what we are trying to achieve when generating augmented images, and how success can be measured and quantified. Methods for displaying augmented images will be discussed, and techniques for conducting psychophysical experiments to evaluate the visual quality of images will also be covered. Examples of augmented images and video sequences from a real-world interactive interior design application will be shown, and used to illustrate the different ideas and techniques introduced throughout the tutorial.

Item Curve Synthesis from Learned Refinement Models (Eurographics Association, 2003)
Simhon, Saul; Dudek, Gregory
We present a method for generating refined 2D illustrations from hand-drawn outlines consisting of only curve strokes. The system controllably synthesizes novel illustrations by augmenting the hand-drawn curves' shape, thickness, color and surrounding texture. These refinements are learned from a training ensemble. Users can select several examples that depict the desired idealized look and train the system for that type of refinement. Further, given several types of refinements, our system automatically recognizes the curve and applies the appropriate refinement. Recognition is accomplished by evaluating the likelihood that the curve belongs to a particular class, based on both its shape and its context in the illustration.

Item Interactive Learning of Computer Graphics Algorithms (Eurographics Association, 2003)
Pan, Zhigeng; Lun, Hung Pak; Gao, Rong
Nowadays, the computer graphics course is usually taught with traditional teaching methodologies and tools, which have their own limitations. In this paper, we present an online interactive computer graphics tutorial. The practical part of the tutorial consists of example programs on the computer in which the users can test the theoretical concepts and generate their own experiences. The integration of the different theoretical and practical parts of the course is realized through a common Web-based interface to the system. The main objective is to allow the users to learn and practice with the algorithms of computer graphics. Two versions of the system, in Chinese and in English, have been implemented.

Item Estimating Positions and Radiances of a Small Number of Light Sources for Real-Time Image-Based Lighting (Eurographics Association, 2003)
Madsen, Claus B.; Sorensen, Mads K. D.; Vittrup, Michael
Image-based lighting (IBL) of virtual objects has become a popular approach to blending virtual and real scenes. In IBL an omni-directional image of a scene is used as the illumination environment for rendering virtual objects. Typically, this rendering is based on global illumination techniques which are far from capable of real-time performance. In this paper we describe how to estimate the positions and radiances of a small number of point light sources, e.g., on the order of 5 to 10, which will produce virtual object appearances that are consistent with those obtained using IBL. The estimated light source parameters can be used directly in OpenGL rendering for real-time performance. We demonstrate the approach on natural scenes.
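To make the last point concrete, a handful of estimated point lights can be handed straight to the fixed-function OpenGL pipeline. The sketch below is only an illustration under assumed data structures; the EstimatedLight struct is hypothetical and not taken from the paper:

```cpp
#include <GL/gl.h>

// Hypothetical container for one estimated source; the paper's actual
// data structures are not specified in the abstract.
struct EstimatedLight {
    float position[3]; // estimated light position
    float radiance[3]; // estimated RGB radiance, assumed scaled to [0,1]
};

// Feed a small set of estimated point lights (e.g. 5 to 10) to the
// fixed-function OpenGL pipeline for real-time rendering.
void applyEstimatedLights(const EstimatedLight* lights, int count) {
    glEnable(GL_LIGHTING);
    for (int i = 0; i < count && i < 8; ++i) {       // classic GL exposes 8 lights
        GLenum id = GL_LIGHT0 + i;
        GLfloat pos[4] = { lights[i].position[0], lights[i].position[1],
                           lights[i].position[2], 1.0f };   // w = 1: positional light
        GLfloat col[4] = { lights[i].radiance[0], lights[i].radiance[1],
                           lights[i].radiance[2], 1.0f };
        glLightfv(id, GL_POSITION, pos);
        glLightfv(id, GL_DIFFUSE,  col);
        glLightfv(id, GL_SPECULAR, col);
        glEnable(id);
    }
}
```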
Item Advanced Shading Techniques (Eurographics Association, 2003)
Diepstraten, Joachim
Focus of this talk:
• Per-pixel point-light Blinn-Phong lighting
• Per-pixel realistic metal BRDF
• Per-pixel anisotropic lighting
• Procedural textures
• Different reflection/environment mapping techniques
• "Faked" translucency

Item Combining 3D Scans and Motion Capture for Realistic Facial Animation (Eurographics Association, 2003)
Breidt, Martin; Wallraven, Christian; Cunningham, Douglas W.; Buelthoff, Heinrich H.
We present ongoing work on the development of new methods for highly realistic facial animation. One of the main contributions is the use of real-world, high-precision data for both the timing of the animation and the deformation of the face geometry. For animation, a set of morph shapes acquired through a 3D scanner is linearly morphed according to timing extracted from point-tracking data recorded with an optical motion capture system.
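The linear morphing step can be pictured as a weighted sum of corresponding vertices across the scanned shapes. The following sketch assumes per-frame weights derived from the motion-capture timing; the actual weighting scheme is not detailed in the abstract:

```cpp
#include <vector>

struct Vertex { float x, y, z; };

// Linear blend of scanned morph shapes: every output vertex is the
// weighted sum of the corresponding vertices in all shapes.
std::vector<Vertex> blendMorphShapes(
    const std::vector<std::vector<Vertex>>& shapes,  // all scans, same vertex count
    const std::vector<float>& weights)               // one weight per shape, per frame
{
    std::vector<Vertex> out(shapes.front().size(), Vertex{0.0f, 0.0f, 0.0f});
    for (size_t s = 0; s < shapes.size(); ++s) {
        for (size_t v = 0; v < out.size(); ++v) {
            out[v].x += weights[s] * shapes[s][v].x;
            out[v].y += weights[s] * shapes[s][v].y;
            out[v].z += weights[s] * shapes[s][v].z;
        }
    }
    return out;
}
```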
Item Shadow Mapping Based on Dual Depth Layers (Eurographics Association, 2003)
Weiskopf, Daniel; Ertl, Thomas
Shadow maps are a widely used means for the generation of shadows, although they exhibit aliasing artifacts and problems of numerical precision. In this paper we extend the concept of a single shadow map by introducing dual shadow maps, which are based on the two depth layers that are closest to the light source. Our shadow algorithm takes into account these two depth values and computes an adaptive depth bias to achieve a robust determination of shadowed regions. Dual depth mapping only modifies the construction of the shadow map and can therefore be combined with other extensions such as filtering, perspective shadow maps, or adaptive shadow maps. Our approach can be mapped to graphics hardware for interactive applications and can also be used in high-quality software renderers.
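One way to read the adaptive bias idea is to place the shadow comparison threshold between the two stored depth layers rather than at a fixed offset from the first layer. The midpoint rule below is only a plausible sketch; the exact bias formula is not given in the abstract:

```cpp
#include <algorithm>

// Shadow test against a dual shadow map: for each texel the two depth
// layers nearest to the light have been stored (z1 <= z2). A per-texel
// adaptive threshold between the layers avoids a global, hand-tuned bias.
bool inShadow(float fragmentLightDepth, float z1, float z2) {
    float threshold = 0.5f * (z1 + z2);       // midpoint: one plausible choice
    threshold = std::max(threshold, z1);      // never in front of the first layer
    return fragmentLightDepth > threshold;
}
```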
Item Image-based Extraction of Material Reflectance Properties of a 3D Rigid Object (Eurographics Association, 2003)
Erdem, M. Erkut; Erdem, I. Aykut; Atalay, Volkan
In this study, we concentrate on the extraction of the reflectance properties of a 3D rigid object from its 2D images; a further aim of this work is to render the object in real time with photorealistic quality under varying illumination conditions. The reflectance property of the object is decomposed into diffuse and specular components. While the diffuse components are stored in a global texture, the specularity of the object is represented by a single Bidirectional Reflectance Distribution Function (BRDF). In the rendering phase, these two components are combined to reproduce the real surface behavior of the object.

Item Trans-Polygon Stroke Method for Frame Coherent Pastel Images (Eurographics Association, 2003)
Murakami, Kyoko; Tsuruno, Reiji
We propose the Trans-Polygon Stroke Method (TPSM) for creating pastel-like animation that keeps frame-to-frame coherence. There are several variable factors in hand-drawn pastel, such as paper roughness and pigments. When these factors are simulated in computer graphics animation, they cause flicker, which reduces visibility. To increase the visibility of pastel-like animation by reducing this flicker, these factors need to be fixed. The procedure of the TPSM is to (1) model objects with quadrilateral polygons, (2) generate particles on the polygons, (3) give a vector direction to each particle, and (4) draw a line from the particle to the n-th polygon along that direction. In addition, the amount of pigment used for one stroke is read from a given table to fix this variable factor. We demonstrate several types of drawn strokes, such as hatching, stippling, and blending, using the proposed method.

Item A New 3D Spring for Deformable Object Animation (Eurographics Association, 2003)
Jeong, Il-Kwon; Lee, Inho
A new 3D spring for deformable object animation is proposed. Currently, the mass-spring system is the most widely used approach for deformable object animation as well as cloth animation. In order to perform a realistic cloth animation using the conventional mass-spring system, one requires three kinds of springs, namely structural, shear, and bend springs. Performing a deformable object animation of a given geometric model also requires user knowledge or trial and error in constructing a stable spring network. For example, a cube model constructed only from the edges used in faces for rendering would collapse, and one should use additional springs connecting the vertices inside the cube in order to make a stable spring model for simulation. Using the proposed 3D spring, all of this tedious and complicated modeling procedure can be omitted, and one can easily construct a stable mass and 3D-spring model from a geometric model of arbitrary shape. In addition, our 3D spring makes it possible to animate sharp folds of a very thin object easily and efficiently without any additional geometrical procedure.

Item Automatic High Level Avatar Guidance Based on Affordance of Movement (Eurographics Association, 2003)
Michael, Despina; Chrysanthou, Yiorgos
As virtual cities become ever more common and more extensive, the need to populate them with virtual pedestrians grows. One of the problems to be resolved for the virtual population is the behaviour simulation. Currently, specifying the behaviour requires a lot of laborious work. In this paper we propose a method for automatically deriving the high-level behaviour of the avatars. We introduce to the Graphics community a new method adapted from ideas recently presented in the Architecture literature. In this method, the general avatar movements are derived from an analysis of the structure of the architectural model. The analysis tries to encode Gibson's principle of affordance, interpreted here as: pedestrians are more attracted towards directions with greater available walkable surface. We have implemented and tested the idea in a 2 x 2 km model of the city of Nicosia. Initial results indicate that the method, although simple, can automatically and efficiently populate the model with realistic results.

Item High quality images from 2.5D video (Eurographics Association, 2003)
Berretty, Robert-Paul; Ernst, Fabian
Given a 2D video stream with an accompanying depth channel, we render high-quality images from viewpoints close to the original one. This is, for instance, required to generate a 3D impression on stereoscopic or multiview screens. We propose a technique for video-based rendering that supports higher-order video filtering. We focus on screens that support horizontal parallax. We can optionally incorporate rendering of a so-called hidden layer that contains data of parts of the scene that are hidden from the original viewpoint. We are able to render high-quality images at only the added cost of the video filtering.

Item Parallel Visibility Test and Occlusion Culling in Avango Virtual Environment Framework (Eurographics Association, 2003)
Klimenko, Stanislav; Nikitina, Lialia; Nikitin, Igor
A real-time visibility test is an attractive feature for Virtual Environment (VE) applications where large datasets should be interactively explored. In such scenes most of the objects are usually occluded by others, and their omission from rendering can significantly accelerate the graphical performance. In this paper we present our novel approach to occlusion culling and its implementation in the VE system Avango. The visibility test module performs a real-time update of the list of visible objects by means of color labeling and either hardware or software histogramming of the corresponding color buffer. Exploiting the distribution capabilities of Avango, the main draw and visibility test processes are parallelized across different computers running Irix or Linux. Applied to an architectural model containing 260,000 textured triangles, our method accelerates the graphical performance from 8 to 32 stereo images/sec.
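In the software path, the histogramming stage essentially counts how many pixels carry each object's color label; an object is kept for rendering only if its count is nonzero. A minimal sketch of that step, where the label encoding and buffer layout are assumptions rather than details taken from the paper:

```cpp
#include <cstdint>
#include <vector>

// Software histogramming step of the visibility test: each object was drawn
// into an item buffer with a unique color label (label 0 = background).
// An object is considered visible if at least one pixel carries its label.
std::vector<bool> visibleObjects(const std::vector<uint32_t>& itemBuffer,
                                 uint32_t objectCount)
{
    std::vector<uint32_t> histogram(objectCount + 1, 0);
    for (uint32_t label : itemBuffer)
        if (label <= objectCount)
            ++histogram[label];                    // count pixels per label

    std::vector<bool> visible(objectCount + 1, false);
    for (uint32_t id = 1; id <= objectCount; ++id)
        visible[id] = histogram[id] > 0;           // nonzero count => visible
    return visible;
}
```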
Item Point-Based Computer Graphics (Eurographics Association, 2003)
Alexa, Marc; Dachsbacher, Carsten; Gross, Markus; Pauly, Mark; van Baar, Jeroen; Zwicker, Matthias

Item Obscurances for ray-tracing (Eurographics Association, 2003)
Mendez, A.; Sbert, M.; Neumann, L.
We present a powerful method to create realistic-looking pictures of scenes with objects that have diffuse and non-diffuse properties. The method recreates the obscurances technique, introduced some years ago, with a new approach based on ray-tracing. The first version of the obscurances technique was used only for diffuse environments. It is already working successfully in some widely known video games because it is able to avoid the high cost of radiosity techniques in interactive and real-time game environments. Soft shadows, nice colour reflection effects, and visually pleasant rendering of corners and other partly occluded surfaces of the scene are reproduced at a small fraction of the cost of radiosity. We extend obscurances here to handle non-diffuse environments via ray-tracing. Instead of computing the obscurance of every patch in the scene, we will compute view-dependent obscurances. Direct illumination is handled separately, and diffuse color albedo functions are used for the obscurance computation. Ray-traced obscurances can be useful in animation, both in the editing phase and in final images, and in ray-traced games, once future graphics cards dramatically decrease the cost of tracing a ray.
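For orientation, an obscurance value is typically gathered by averaging a distance-dependent function over hemisphere sample rays: rays that travel far (or miss the scene) contribute full openness, nearby hits darken the point. The sketch below follows that general recipe; the saturation distance dMax, the shape of rho(), and the traceDistance() callback are illustrative assumptions, not values or interfaces from the paper:

```cpp
#include <cmath>

// Distance-dependent obscurance term: 1 beyond the saturation distance,
// smaller for nearby occluders. sqrt(d/dMax) is one common choice.
double rho(double hitDistance, double dMax) {
    if (hitDistance >= dMax) return 1.0;      // treated as unoccluded
    return std::sqrt(hitDistance / dMax);
}

// Monte Carlo estimate of the obscurance at a surface point: average rho
// over numRays cosine-weighted hemisphere samples. traceDistance(i) stands
// in for the ray tracer's nearest-hit distance for the i-th sample ray
// (returning a value >= dMax when the ray misses the scene).
template <class TraceFn>
double obscurance(int numRays, double dMax, TraceFn traceDistance) {
    double sum = 0.0;
    for (int i = 0; i < numRays; ++i)
        sum += rho(traceDistance(i), dMax);
    return sum / numRays;
}
```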