Search Results
Now showing 1 - 10 of 12
Item: Real-Time Bump Map Synthesis (The Eurographics Association, 2001)
Authors: Kautz, Jan; Heidrich, Wolfgang; Seidel, Hans-Peter. Editors: Kurt Akeley and Ulrich Neumann.
In this paper we present a method that automatically synthesizes bump maps at arbitrary levels of detail in real time. The only input data we require is a normal density function; the bump map is generated according to that function, which is also used to shade the generated bump map. The technique allows infinite zooming into the surface, because more (consistent) detail can be created on the fly. The shading of such a surface is consistent when displayed at different distances from the viewer (assuming that the surface structure is self-similar). The bump map generation and the shading algorithm can also be used separately.

Item: Real-Time Capture, Reconstruction and Insertion into Virtual World of Human Actors (The Eurographics Association, 2003)
Authors: Hasenfratz, J.M.; Lapierre, M.; Gascuel, J.-D.; Boyer, E. Editors: Peter Hall and Philip Willis.
In this paper, we show how to capture an actor without intrusive trackers or any special environment such as a blue-screen set, how to estimate the actor's 3D geometry, and how to insert this geometry into a virtual world in real time. We use several cameras in conjunction with background subtraction to produce silhouettes of the actor as observed from the different camera viewpoints. These silhouettes allow the 3D geometry of the actor to be estimated by a voxel-based method. This geometry is rendered with a marching cubes algorithm and inserted into a virtual world. Shadows of the actor corresponding to virtual lights are then added, and interactions with objects of the virtual world are proposed. The main contribution of this paper is a complete pipeline that runs at up to 30 frames per second. Since the speed of the process is limited by its slowest step, we present all of the steps here. For each of them, we present and discuss the solution that is used. Some of them are new solutions, such as the 3D shape estimation, which is performed using graphics hardware. Results are presented and discussed.
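[Editor's sketch] The silhouette-based reconstruction step described in the entry above can be illustrated in a few lines: a voxel is kept only if it projects inside the actor's silhouette in every camera. The sketch below is a plain CPU version in NumPy, not the paper's GPU-accelerated pipeline; the function name, grid layout and 3x4 projection matrices are assumptions. The resulting occupancy grid could then be meshed with marching cubes, as the abstract describes.

    import numpy as np

    def carve_voxels(silhouettes, projections, grid_min, grid_max, res):
        # Voxel centres of a res^3 grid spanning [grid_min, grid_max],
        # in homogeneous world coordinates.
        axes = [np.linspace(grid_min[i], grid_max[i], res) for i in range(3)]
        xs, ys, zs = np.meshgrid(*axes, indexing="ij")
        pts = np.stack([xs, ys, zs, np.ones_like(xs)], axis=-1).reshape(-1, 4)

        occupied = np.ones(len(pts), dtype=bool)
        for sil, P in zip(silhouettes, projections):
            pix = pts @ P.T                              # 3x4 projection: world -> pixel
            u = np.round(pix[:, 0] / pix[:, 2]).astype(int)
            v = np.round(pix[:, 1] / pix[:, 2]).astype(int)
            h, w = sil.shape
            inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
            hit = np.zeros(len(pts), dtype=bool)
            hit[inside] = sil[v[inside], u[inside]]      # True where the actor is seen
            occupied &= hit                              # carve away anything outside a silhouette
        return occupied.reshape(res, res, res)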
Item: Regularised Anisotropic Nonlinear Diffusion for Rendering Refraction in Volume Graphics (The Eurographics Association, 2005)
Authors: Rodgman, David; Chen, Min. Editor: Mike Chantler.
Rendering refraction in volume graphics requires smoothly distributed normals to synthesise good-quality visual representations. Such refractive visualisation is more susceptible to noise in the data than visualisations that do not involve refraction. In this paper, we address the need to improve the continuity of voxel gradients in discretely sampled volume datasets using nonlinear diffusion methods, which were originally developed for image denoising. We consider the need to minimise unnecessary geometric distortion, detail the functional specification of a volumetric filter for regularised anisotropic nonlinear diffusion (R-ANLD), discuss further improvements to the filter, and compare its efficacy with that of an anisotropic nonlinear diffusion (ANLD) filter as well as a Gaussian filter and a linear diffusion filter. Our results indicate that it is possible to make significant improvements in image quality in refractive rendering without excessive distortion.

Item: Interactive Rendering of Atmospheric Scattering Effects Using Graphics Hardware (The Eurographics Association, 2002)
Authors: Dobashi, Yoshinori; Yamamoto, Tsuyoshi; Nishita, Tomoyuki. Editors: Thomas Ertl and Wolfgang Heidrich and Michael Doggett.
To create realistic images using computer graphics, an important element to consider is atmospheric scattering, that is, the phenomenon by which light is scattered by small particles in the air. This effect is the cause of the light beams produced by spotlights, shafts of light, foggy scenes, the bluish appearance of the earth's atmosphere, and so on. This paper proposes a fast method for rendering atmospheric scattering effects based on the actual physical phenomena. In the proposed method, look-up tables are prepared to store the intensities of the scattered light, and these are then used as textures. Realistic images are then created at interactive rates by making use of graphics hardware.

Item: Adaptive Texture Maps (The Eurographics Association, 2002)
Authors: Kraus, Martin; Ertl, Thomas. Editors: Thomas Ertl and Wolfgang Heidrich and Michael Doggett.
We introduce several new variants of hardware-based adaptive texture maps and present applications in two, three, and four dimensions. In particular, we discuss representations of images and volumes with locally adaptive resolution, lossless compression of light fields, and vector quantization of volume data. All corresponding texture decoders were successfully integrated into the programmable texturing pipeline of commercial off-the-shelf graphics hardware.

Item: Extending Natural Textures with Multi-Scale Synthesis (The Eurographics Association, 2003)
Authors: Stahlhut, O. Editors: Peter Hall and Philip Willis.
This paper presents a texture synthesis algorithm designed for the tile-less generation of images of arbitrary size from small sample images. The synthesised texture shows features that are visually similar to the sample over a wide frequency range. The development of the algorithm aimed at high-quality results for a large range of natural textures, incorporation of the original samples in the synthesis product, ease of use, and good texturing speed even with input sample data two orders of magnitude larger than that used by previous techniques. Like other algorithms, we utilise an implicit texture model by copying arbitrarily shaped texture patches from the sample to the destination over a multi-scale image pyramid. Our method combines the advantages of different previous techniques with respect to quality. A mixture of exhaustive searching, massively parallel computing and the well-known LBG algorithm ensures a good balance between texturing quality and speed.

Item: High-Quality Pre-Integrated Volume Rendering (The Eurographics Association, 2001)
Authors: Engel, Klaus; Kraus, Martin; Ertl, Thomas. Editors: Kurt Akeley and Ulrich Neumann.
We introduce a novel texture-based volume rendering approach that achieves the image quality of the best post-shading approaches with far fewer slices. It is suitable for new flexible consumer graphics hardware and provides high image quality even for low-resolution volume data and nonlinear transfer functions with high frequencies, without the performance overhead caused by rendering additional interpolated slices. This is especially useful for volumetric effects in computer games and professional scientific volume visualization, which depend heavily on memory bandwidth and rasterization power. We present an implementation of the algorithm on current programmable consumer graphics hardware using multi-textures with advanced texture fetch and pixel shading operations. We implemented direct volume rendering, volume shading, an arbitrary number of isosurfaces, and mixed-mode rendering. The performance depends neither on the number of isosurfaces nor on the definition of the transfer functions, and the method is therefore well suited for interactive high-quality volume graphics.
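[Editor's sketch] The core idea of pre-integration, as in the entry above, is to tabulate colour and opacity for every pair of scalar values (front and back of a slab) ahead of time, so the renderer needs only one 2D lookup per slab. The sketch below is the textbook simplification (self-attenuation within a slab is ignored) rather than the paper's hardware implementation; the function name and RGBA layout are assumptions.

    import numpy as np

    def preintegration_table(tf, slab_width=1.0):
        # tf: (N, 4) RGBA transfer-function samples; tf[:, 3] is extinction.
        # One cumulative sum makes every (sf, sb) entry an O(1) difference.
        n = len(tf)
        integral = np.vstack([np.zeros(4), np.cumsum(tf, axis=0)])

        sf, sb = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        lo, hi = np.minimum(sf, sb), np.maximum(sf, sb)
        mean = (integral[hi + 1] - integral[lo]) / (hi - lo + 1)[..., None]

        table = np.empty((n, n, 4))
        table[..., :3] = mean[..., :3] * slab_width               # averaged emission over the slab
        table[..., 3] = 1.0 - np.exp(-mean[..., 3] * slab_width)  # opacity from mean extinction
        return table

Because the table averages the transfer function between the two sample values, sharp transfer-function features are captured even when the slabs themselves are coarse, which is why far fewer slices suffice.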
Item: Silhouette Maps for Improved Texture Magnification (The Eurographics Association, 2004)
Authors: Sen, Pradeep. Editors: Tomas Akenine-Moeller and Michael McCool.
Texture mapping is a simple way of increasing visual realism without adding geometric complexity. Because it is a discrete process, it is important to filter samples properly when the sampling rate of the texture differs from that of the final image. This is particularly problematic when the texture is magnified or minified. While reasonable approaches exist to tackle the minified case, few options exist for improving the quality of magnified textures in real-time applications. Most simply interpolate bilinearly between samples, yielding exceedingly blurry textures. In this paper, we address the real-time magnification problem by extending the silhouette map algorithm to general texturing. In particular, we discuss the creation of these silmap textures as well as a simple filtering scheme that allows viewing at all levels of magnification. The technique was implemented on current graphics hardware, and our results show that we can achieve a level of visual quality comparable to that of a much larger texture.

Item: A Quadrilateral Rendering Primitive (The Eurographics Association, 2004)
Authors: Hormann, Kai; Tarini, Marco. Editors: Tomas Akenine-Moeller and Michael McCool.
The only surface primitives supported by common graphics hardware are triangles, and more complex shapes have to be triangulated before being sent to the rasterizer. Even quadrilaterals, which are frequently used in many applications, are rendered as a pair of triangles after being split along either diagonal. This creates an undesirable C1 discontinuity that is visible in the shading or texture signal. We propose a new method that overcomes this drawback and is designed to be implemented in hardware as a new rasterizer. It processes a potentially non-planar quadrilateral directly, without any splitting, and interpolates attributes smoothly inside the quadrilateral. This interpolation is based on a recent generalization of barycentric coordinates that we adapted to handle perspective correction and situations in which a quadrilateral is partially behind the point of view.
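[Editor's sketch] One well-known generalization of barycentric coordinates to arbitrary polygons is Floater's mean value coordinates; the abstract above does not say which variant the paper adapts, and its perspective correction is omitted here. The sketch below computes the weights for a 2D point inside a convex quad; vertex attributes (colour, texture coordinates) are then blended with the same weights.

    import numpy as np

    def mean_value_coords(p, verts):
        # Mean value coordinates of a 2D point p inside a convex polygon.
        def cross2(a, b):
            return a[0] * b[1] - a[1] * b[0]

        d = verts - p                        # vectors from p to the vertices
        r = np.linalg.norm(d, axis=1)        # distances from p to the vertices
        n = len(verts)
        w = np.empty(n)
        for i in range(n):
            prev, nxt = d[(i - 1) % n], d[(i + 1) % n]
            a_prev = np.arctan2(cross2(prev, d[i]), np.dot(prev, d[i]))  # angle v_{i-1}, p, v_i
            a_next = np.arctan2(cross2(d[i], nxt), np.dot(d[i], nxt))    # angle v_i, p, v_{i+1}
            w[i] = (np.tan(a_prev / 2) + np.tan(a_next / 2)) / r[i]
        return w / w.sum()                   # weights sum to 1; attribute = sum(w_i * a_i)

For the centre of a unit square this returns equal weights of 0.25, and the weights vary smoothly across the whole quad, which is what removes the diagonal C1 discontinuity of the two-triangle split.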
Item: Towards a new Camera Model for X3D (The Eurographics Association, 2009)
Authors: Jung, Yvonne; Behr, Johannes. Editors: Dieter W. Fellner and Alexei Sourin and Johannes Behr and Krzysztof Walczak.
Creating and setting the right parameters for the virtual camera is crucial for any content creation process. However, this is not easy, since most current camera models, including the X3D Viewpoint, use a position and orientation in 3D space to define the final visualized image. People use authoring tools or simple interactive navigation methods (e.g. "lookAt" or "showAll") to ease the process, but in the end they still move a 6D (translation and rotation) camera beacon to get the final image. We thus propose a new X3D camera model, the CinematographicViewpoint node, which does not force the content creator to move the camera but allows the author to directly define which objects should be visible on the screen. We borrow established techniques from filmmaking (e.g. the rule of thirds and the line of action) that allow the author to define objects and object relations, which the camera model uses to automatically calculate the final transformation in space. The new camera model additionally includes a model for global visual effects (e.g. motion blur and depth of field), which allows classical film effects to be incorporated into real-time scenes. Combined, the two approaches make it much easier for content creators to build visual results and camera movements that are closer to traditional filming. The proposed approach also supports automatic camera movements that are bound to interactive content, which has not been possible before.
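[Editor's sketch] To make the idea of constraint-driven camera placement concrete: given an object's bounding sphere, a desired screen position (e.g. a rule-of-thirds intersection) and a desired screen coverage, a camera pose can be solved for directly. The sketch below is a minimal, hypothetical illustration of that kind of constraint, not the CinematographicViewpoint algorithm; the function name, parameters and the fixed -z viewing direction are all assumptions.

    import numpy as np

    def frame_object(center, radius, fovy, aspect, screen_pos=(1/3, 1/3), fill=0.5):
        # Solve for a camera (looking along -z in world space) that shows a
        # bounding sphere at a normalized screen position with given coverage.
        half_h = np.tan(fovy / 2)                    # half view-plane height at depth 1
        dist = radius / (fill * half_h)              # depth at which the sphere covers `fill`
        sx, sy = screen_pos
        dx = (2 * sx - 1) * half_h * aspect * dist   # lateral offsets that shift the
        dy = (2 * sy - 1) * half_h * dist            # object away from the screen centre
        eye = np.asarray(center) + np.array([-dx, -dy, dist])
        look_at = eye + np.array([0.0, 0.0, -dist])  # view direction stays -z
        return eye, look_at

With screen_pos=(0.5, 0.5) this degenerates to a conventional "showAll" framing; a thirds point simply shifts the view axis so the object projects off-centre, which is the kind of image-space constraint the proposed node lets authors state directly.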