2002_EGWR02: 13th Eurographics Workshop on Rendering
Browsing 2002_EGWR02: 13th Eurographics Workshop on Rendering by Issue Date
Now showing 1 - 20 of 29
Item: Enhancing and Optimizing the Render Cache (The Eurographics Association, 2002)
Authors: Walter, Bruce; Drettakis, George; Greenberg, Donald P. Editors: P. Debevec and S. Gibson.
Interactive rendering often requires the use of simplified shading algorithms with reduced illumination fidelity; higher quality rendering algorithms are usually too slow for interactive use. The render cache is a technique to bridge this performance gap and allow ray-based renderers to be used in interactive contexts by providing automatic sample interpolation, frame-to-frame sample reuse, and prioritized sampling. In this paper we present several extensions to the original render cache, including predictive sampling, reorganized computation for better memory coherence, an additional interpolation filter to handle sparser data, and SIMD acceleration. These optimizations allow the render cache to scale to larger resolutions, reduce its visual artifacts, and provide better handling of low sample rates. We also provide a downloadable binary to allow researchers to evaluate and use the render cache.

Item: Interactive Global Illumination Using Selective Photon Tracing (The Eurographics Association, 2002)
Authors: Dmitriev, Kirill; Brabec, Stefan; Myszkowski, Karol; Seidel, Hans-Peter. Editors: P. Debevec and S. Gibson.
We present a method for interactive global illumination computation that is embedded in the framework of quasi-Monte Carlo photon tracing and density estimation techniques. The method exploits temporal coherence of illumination by tracing photons selectively to the scene regions that require an illumination update. Such regions are identified with high probability by a small number of pilot photons. Starting from the pilot photons that require updating, the remaining photons with similar paths in the scene can be found immediately. This is possible due to the periodicity property inherent in the multi-dimensional Halton sequence, which is used to generate the photons. If the invalid photons cannot all be updated during a single frame, frames are progressively refined in subsequent cycles. The order in which photons are updated is decided by inexpensive energy- and perception-based criteria whose goal is to minimize the perceivability of outdated illumination. The method buckets all photons on the fly into mesh elements and does not require any data structures in the temporal domain, which makes it suitable for interactive rendering of complex scenes. Since mesh-based reconstruction of lighting patterns with high spatial frequencies is inefficient, we use a hybrid approach in which direct illumination and the resulting shadows are rendered using graphics hardware.
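The selective update in the photon tracing entry above relies on a structural property of the Halton sequence: indices that differ by a multiple of b^m share their first m base-b digits, so their sample values differ by less than b^-m and the photons seeded from them follow similar initial paths. The sketch below is our own minimal illustration of that property; the radical-inverse function, base, and indices are chosen for the example and are not taken from the paper.

```python
# Minimal illustration (not the authors' code) of Halton-sequence periodicity.
def radical_inverse(i: int, base: int) -> float:
    """Radical inverse of index i in the given base; one Halton coordinate."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

b, m = 2, 4                       # base 2, match the first 4 digits
i, j = 5, 5 + 3 * b**m            # indices differing by a multiple of b**m
assert abs(radical_inverse(i, b) - radical_inverse(j, b)) < b**-m
```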
Item: A Real-Time Distributed Light Field Camera (The Eurographics Association, 2002)
Authors: Yang, Jason C.; Everett, Matthew; Buehler, Chris; McMillan, Leonard. Editors: P. Debevec and S. Gibson.
We present the design and implementation of a real-time, distributed light field camera. Our system allows multiple viewers to navigate virtual cameras in a dynamically changing light field that is captured in real time. The light field camera consists of 64 commodity video cameras connected to off-the-shelf computers. We employ a distributed rendering algorithm that allows us to overcome the data bandwidth problems inherent in dynamic light fields. The algorithm works by selectively transmitting only those portions of the video streams that contribute to the desired virtual views. This technique not only reduces the total bandwidth, but also allows us to scale the number of cameras in the system without increasing network bandwidth. We demonstrate the system with a number of examples.

Item: Fast, Arbitrary BRDF Shading for Low-Frequency Lighting Using Spherical Harmonics (The Eurographics Association, 2002)
Authors: Kautz, Jan; Sloan, Peter-Pike; Snyder, John. Editors: P. Debevec and S. Gibson.
Real-time shading using general (e.g., anisotropic) BRDFs has so far been limited to a few point or directional light sources. We extend such shading to smooth, area lighting using a low-order spherical harmonic basis for the lighting environment. We represent the 4D product of the BRDF and the cosine factor (the dot product of the incident lighting and surface normal vectors) as a 2D table of spherical harmonic coefficients. Each table entry represents, for a single view direction, the integral of this product function times the lighting on the hemisphere, expressed in spherical harmonics. This reduces the shading integral to a simple dot product of 25-component vectors, easily evaluated on PC graphics hardware. Non-trivial BRDF models require rotating the lighting coefficients to a local frame at each point on an object, which currently forms the computational bottleneck. Real-time results can be achieved by fixing the view to allow dynamic lighting, or vice versa. We also generalize a previous method for precomputed radiance transfer to handle general BRDF shading. This provides shadows and interreflections that respond in real time to lighting changes on a preprocessed object of arbitrary material (BRDF) type.
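To make the "dot product of 25-component vectors" in the entry above concrete, here is a minimal sketch of the shading step. The table layout, view-direction indexing, and random placeholder data are our assumptions for illustration and do not reproduce the paper's implementation.

```python
import numpy as np

N_COEFFS = 25                                   # SH bands 0..4 give 25 coefficients
N_VIEWS = 64 * 32                               # hypothetical view-direction grid

brdf_table = np.random.rand(N_VIEWS, N_COEFFS)  # placeholder precomputed BRDF*cosine table
light_sh = np.random.rand(N_COEFFS)             # placeholder SH projection of the lighting

def shade(view_index: int) -> float:
    """Exit radiance for one view direction: a single 25-element dot product."""
    return float(brdf_table[view_index] @ light_sh)

print(shade(0))
```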
Item: Interactive Global Illumination using Fast Ray Tracing (The Eurographics Association, 2002)
Authors: Wald, Ingo; Kollig, Thomas; Benthin, Carsten; Keller, Alexander; Slusallek, Philipp. Editors: P. Debevec and S. Gibson.
Rasterization hardware provides interactive frame rates for rendering dynamic scenes, but lacks the ray tracing capability required for efficient global illumination simulation. Existing ray tracing based methods yield high quality renderings but are far too slow for interactive use. We present a new parallel global illumination algorithm that scales perfectly, has minimal preprocessing and communication overhead, applies highly efficient sampling techniques based on randomized quasi-Monte Carlo integration, and benefits from a fast parallel ray tracing implementation by shooting coherent groups of rays. The resulting performance allows arbitrary changes to the scene while simulating global illumination, including shadows from area light sources, indirect illumination, specular effects, and caustics, at interactive frame rates. When interaction ceases, the image rapidly converges to a high quality rendering.

Item: Curve Analogies (The Eurographics Association, 2002)
Authors: Hertzmann, Aaron; Oliver, Nuria; Curless, Brian; Seitz, Steven M. Editors: P. Debevec and S. Gibson.
This paper describes a method for learning statistical models of 2D curves, and shows how these models can be used to design line art rendering styles by example. A user can create a new style by providing an example of the style, e.g., by sketching a curve in a drawing program. Our method can then synthesize random new curves in this style, and modify existing curves to have the same style as the example. The method can also incorporate position constraints on the resulting curves.

Item: Textured Depth Meshes for Real-Time Rendering of Arbitrary Scenes (The Eurographics Association, 2002)
Authors: Jeschke, Stefan; Wimmer, Michael. Editors: P. Debevec and S. Gibson.
This paper presents a new approach to generate textured depth meshes (TDMs), an impostor-based scene representation that can be used to accelerate the rendering of static polygonal models. The TDMs are precalculated for a fixed viewing region (view cell). The approach relies on a layered rendering of the scene to produce a voxel-based representation. Next, a highly complex polygon mesh is constructed that covers all the voxels. This mesh is then simplified using a special error metric that ensures all voxels stay covered. Finally, the remaining polygons are resampled using the voxel representation to obtain their textures. The contribution of our approach is manifold: first, it can handle polygonal models without any knowledge about their structure. Second, only scene parts that may become visible from within the view cell are represented, thereby cutting down on impostor complexity and storage costs. Third, an error metric guarantees that the impostors are practically indistinguishable from the original model (i.e., no rubber-sheet effects or holes appear, as they do in most previous approaches). Furthermore, current graphics hardware is exploited for the construction and use of the impostors.

Item: Fast Primitive Distribution for Illustration (The Eurographics Association, 2002)
Authors: Secord, Adrian; Heidrich, Wolfgang; Streit, Lisa. Editors: P. Debevec and S. Gibson.
In this paper we present a high-quality, image-space approach to illustration that preserves continuous tone by probabilistically distributing primitives while maintaining interactive rates. Our method allows for frame-to-frame coherence by matching movements of primitives with changes in the input image. It can be used to create a variety of drawing styles by varying the primitive type or direction. We show that our approach is able to preserve both tone and (depending on the drawing style) high-frequency detail. Finally, while our algorithm requires only an image as input, additional 3D information enables the creation of a larger variety of drawing styles.

Item: Real-Time Halftoning: A Primitive For Non-Photorealistic Shading (The Eurographics Association, 2002)
Authors: Freudenberg, Bert; Masuch, Maic; Strothotte, Thomas. Editors: P. Debevec and S. Gibson.
We introduce halftoning as a general primitive for real-time non-photorealistic shading. It is capable of producing a variety of rendering styles, ranging from engraving with lighting-dependent line width to pen-and-ink style drawings using prioritized stroke textures. Since monitor resolution is limited, we employ a smooth threshold function that provides stroke antialiasing. By applying the halftone screen in texture space and evaluating the threshold function for each pixel, we can influence the shading on a pixel-by-pixel basis. This enables many effects, including indication mapping and individual stroke lighting. Our real-time halftoning method is a drop-in replacement for conventional multitexturing and runs on commodity hardware. It is therefore easy to integrate into existing applications, as we demonstrate with an artistically rendered level in a game engine.
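As a rough sketch of the smooth threshold idea in the halftoning entry above: instead of a hard comparison between the target tone and the halftone screen value, a clamped, scaled difference gives an antialiased stroke edge. The function form and the softness parameter below are our own placeholders, not the authors' shader.

```python
def smooth_threshold(tone: float, screen: float, softness: float = 8.0) -> float:
    """Hard halftoning would return 1.0 if tone > screen else 0.0;
    clamping a scaled difference yields an antialiased transition instead."""
    return min(1.0, max(0.0, 0.5 + softness * (tone - screen)))

# Example: a pixel whose tone is just below the screen value is partially lit.
print(smooth_threshold(0.48, 0.5))
```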
Item: Video Flashlights - Real Time Rendering of Multiple Videos for Immersive Model Visualization (The Eurographics Association, 2002)
Authors: Sawhney, H. S.; Arpa, A.; Kumar, R.; Samarasekera, S.; Aggarwal, M.; Hsu, S.; Nister, D.; Hanna, K. Editors: P. Debevec and S. Gibson.
Videos and 3D models have traditionally existed in separate worlds and as distinct representations. Although texture maps for 3D models have traditionally been derived from multiple still images, real-time mapping of live videos as textures on 3D models has not been attempted. This paper presents a system for rendering multiple live videos in real time over a 3D model as a novel and demonstrative application of the power of commodity graphics hardware. The system, metaphorically called the Video Flashlight system, "illuminates" a static 3D model with live video textures from static and moving cameras in the same way that a flashlight (torch) illuminates an environment. The Video Flashlight system is also an augmented reality solution for security and monitoring systems that deploy numerous cameras to monitor a large-scale campus or an urban site. Current video monitoring systems are highly limited in providing global awareness, since they typically display numerous camera videos on a grid of 2D displays. In contrast, the Video Flashlight system exploits the real-time rendering capabilities of current graphics hardware and renders live videos from various parts of an environment co-registered with the model. The user gets a global view of the model and is also able to visualize the dynamic videos simultaneously in the context of the model. In particular, the locations of pixels and objects seen in the videos are precisely overlaid on the model while the user navigates through the model. The paper presents an overview of the system, details the real-time rendering, and demonstrates the efficacy of the augmented reality application.
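The core of the "video flashlight" idea above is projective texturing: each visible point of the 3D model is projected into a calibrated camera and the current video frame is sampled there. The sketch below is our generic illustration; K, R, t, and frame are assumed camera intrinsics, pose, and video image, not part of the described system.

```python
import numpy as np

def project_to_video(point_world, K, R, t):
    """Map a 3D world point to pixel coordinates in one calibrated video camera."""
    p_cam = R @ np.asarray(point_world) + t   # world -> camera coordinates
    x = K @ p_cam                             # perspective projection
    return x[0] / x[2], x[1] / x[2]           # pixel coordinates for sampling the frame

# Usage (assumed inputs): u, v = project_to_video(p, K, R, t); colour = frame[int(v), int(u)]
```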
Item: GigaWalk: Interactive Walkthrough of Complex Environments (The Eurographics Association, 2002)
Authors: Baxter III, William V.; Sud, Avneesh; Govindaraju, Naga K.; Manocha, Dinesh. Editors: P. Debevec and S. Gibson.
We present a new parallel algorithm and a system, GigaWalk, for interactive walkthrough of complex, gigabyte-sized environments. Our approach combines occlusion culling and levels of detail, and uses two graphics pipelines with one or more processors. GigaWalk uses a unified scene graph representation for multiple acceleration techniques, and performs spatial clustering of geometry, conservative occlusion culling, and load balancing between graphics pipelines and processors. GigaWalk has been used to render CAD environments composed of tens of millions of polygons at interactive rates on systems consisting of two graphics pipelines. Overall, our system's combination of levels of detail and occlusion culling results in significant frame-rate improvements over view-frustum culling or either technique used alone.

Item: Time Dependent Photon Mapping (The Eurographics Association, 2002)
Authors: Cammarano, Mike; Jensen, Henrik Wann. Editors: P. Debevec and S. Gibson.
The photon map technique for global illumination does not specifically address animated scenes. In particular, prior work has not considered the problem of temporal sampling (motion blur) while using the photon map. In this paper we examine several approaches for simulating motion blur with the photon map. In particular, we show that a distribution of photons in time combined with the standard photon map radiance estimate is incorrect, and we introduce a simple generalization that correctly handles photons distributed in both time and space. Our results demonstrate that this time dependent photon map extension allows fast and correct estimates of motion-blurred illumination, including motion-blurred caustics.

Item: Towards Real-Time Texture Synthesis with the Jump Map (The Eurographics Association, 2002)
Authors: Zelinka, Steve; Garland, Michael. Editors: P. Debevec and S. Gibson.
While texture synthesis has been well studied in recent years, real-time techniques remain elusive. To help facilitate real-time texture synthesis, we divide the task into two phases: a relatively slow analysis phase and a real-time synthesis phase. Any particular texture need only be analyzed once, and then an unlimited amount of texture may be synthesized in real time. Our analysis phase generates a jump map, which stores for each input pixel a set of matching input pixels (jumps). Texture synthesis proceeds in real time as a random walk through the jump map. Each new pixel is synthesized by extending the patch of input texture from which one of its neighbours was copied. Occasionally, a jump is taken through the jump map to begin a new patch. Despite the method's extreme simplicity, its speed and output quality compare favourably with recent patch-based algorithms.
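Below is a minimal sketch of the random walk described in the jump map entry above, written by us for illustration: each output pixel extends the patch its left (or upper) neighbour was copied from, and with a small probability a precomputed jump starts a new patch. The jump_map format (a dict from input position to a list of matching positions) is an assumption.

```python
import random

def synthesize(out_w, out_h, input_img, jump_map, jump_prob=0.05):
    in_h, in_w = len(input_img), len(input_img[0])
    src = {(0, 0): (0, 0)}      # output pixel -> input pixel it was copied from
    out = {}
    for y in range(out_h):
        for x in range(out_w):
            if x > 0:                                    # continue the left neighbour's patch
                sx, sy = src[(x - 1, y)]
                src[(x, y)] = ((sx + 1) % in_w, sy)
            elif y > 0:                                  # continue the upper neighbour's patch
                sx, sy = src[(x, y - 1)]
                src[(x, y)] = (sx, (sy + 1) % in_h)
            if random.random() < jump_prob and jump_map.get(src[(x, y)]):
                src[(x, y)] = random.choice(jump_map[src[(x, y)]])   # jump: begin a new patch
            sx, sy = src[(x, y)]
            out[(x, y)] = input_img[sy][sx]
    return out

# Example with a trivial 2x2 input texture and no jumps:
print(synthesize(4, 4, [[0, 1], [2, 3]], {}))
```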
Item: Approximate Soft Shadows on Arbitrary Surfaces using Penumbra Wedges (The Eurographics Association, 2002)
Authors: Akenine-Möller, Tomas; Assarsson, Ulf. Editors: P. Debevec and S. Gibson.
Shadow generation has been the subject of serious investigation in computer graphics, and many clever algorithms have been suggested. However, previous algorithms cannot render high quality soft shadows onto arbitrary, animated objects in real time. Pursuing this goal, we present a new soft shadow algorithm that extends the standard shadow volume algorithm by replacing each shadow quadrilateral with a new primitive, called the penumbra wedge. For each silhouette edge as seen from the light source, a penumbra wedge is created that approximately models the penumbra volume that this edge gives rise to. Together the penumbra wedges can render images that are often remarkably close to more precisely rendered soft shadows. Furthermore, our new primitive is designed so that it can be rasterized efficiently. Many real-time algorithms can only use planes as shadow receivers, while ours can handle arbitrary shadow receivers. The proposed algorithm can be of great value to, e.g., 3D computer games, especially since it is highly likely that this algorithm can be implemented on programmable graphics hardware coming out within the next year, and because games often prefer perceptually convincing shadows.

Item: Spatio-Temporal View Interpolation (The Eurographics Association, 2002)
Authors: Vedula, Sundar; Baker, Simon; Kanade, Takeo. Editors: P. Debevec and S. Gibson.
We propose a fully automatic algorithm for view interpolation of a completely non-rigid dynamic event across both space and time. The algorithm operates by combining images captured across space to compute voxel models of the scene shape at each time instant, and images captured across time to compute the "scene flow" between the voxel models. The scene flow is the non-rigid 3D motion of every point in the scene. To interpolate in time, the voxel models are "flowed" using an appropriate multiple of the scene flow, and a smooth surface is fit to the result. The novel image is then computed by ray-casting to the surface at the intermediate time instant, following the scene flow to the neighboring time instants, projecting into the input images at those times, and finally blending the results. We use our algorithm to create re-timed slow-motion fly-by movies of dynamic real-world events.

Item: A Tone Mapping Algorithm for High Contrast Images (The Eurographics Association, 2002)
Authors: Ashikhmin, Michael. Editors: P. Debevec and S. Gibson.
A new method is presented that takes as input a high dynamic range image and maps it into a limited range of luminance values reproducible by a display device. There is significant evidence that a similar operation is performed by the early stages of the human visual system (HVS). Our approach follows the functionality of the HVS without attempting to construct a sophisticated model of it. The operation is performed in three steps. First, we estimate the local adaptation luminance at each point in the image. Then, a simple function is applied to these values to compress them into the required display range. Since important image details can be lost during this process, we then re-introduce details in a final pass over the image.
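The following is a generic sketch of the three-step structure described in the tone mapping entry above, using stand-in functions of our own (a Gaussian blur for local adaptation and a simple compressive curve) rather than the paper's actual formulas.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_map(luminance: np.ndarray) -> np.ndarray:
    l_adapt = gaussian_filter(luminance, sigma=3.0)    # 1. local adaptation luminance
    compressed = l_adapt / (1.0 + l_adapt)             # 2. placeholder compressive function
    detail = luminance / np.maximum(l_adapt, 1e-6)     # 3. re-introduce local detail
    return np.clip(compressed * detail, 0.0, 1.0)

print(tone_map(np.array([[0.1, 100.0], [5.0, 2000.0]])))
```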
Item: Appearance based object modeling using texture database: Acquisition, compression and rendering (The Eurographics Association, 2002)
Authors: Furukawa, R.; Kawasaki, H.; Ikeuchi, K.; Sakauchi, M. Editors: P. Debevec and S. Gibson.
Image-based object modeling can be used to compose photorealistic images of modeled objects under various rendering conditions, such as viewpoint, light directions, etc. However, it is challenging to acquire the large number of object images required for all combinations of capturing parameters and then to handle the resulting huge data sets for the model. This paper presents a novel modeling method for acquiring and preserving the appearance of objects. Using a specialized capturing platform, we first acquire the objects' geometric information and their complete 4D indexed texture sets, or bi-directional texture functions (BTFs), in a highly automated manner. Then we compress the acquired texture database using tensor product expansion. The compressed texture database facilitates rendering objects with arbitrary viewpoints, illumination, and deformation.

Item: Accelerating Path Tracing by Re-Using Paths (The Eurographics Association, 2002)
Authors: Bekaert, Philippe; Sbert, Mateu; Halton, John. Editors: P. Debevec and S. Gibson.
This paper describes a new acceleration technique for rendering algorithms, such as path tracing, that use so-called gathering random walks. Usually in path tracing, each traced path is used to compute a contribution to only a single point on the virtual screen. We propose to combine paths traced through nearby screen points in such a way that each path contributes to multiple screen points in a provably good way. Our approach is unbiased and is not restricted to diffuse light scattering. It complements previous image noise reduction techniques for Monte Carlo ray tracing. We observe speed-ups of one order of magnitude in the computation of indirect illumination.

Item: Local Illumination Environments for Direct Lighting Acceleration (The Eurographics Association, 2002)
Authors: Fernandez, Sebastian; Bala, Kavita; Greenberg, Donald P. Editors: P. Debevec and S. Gibson.
Computing high-quality direct illumination in scenes with many lights is an open area of research. This paper presents a world-space caching mechanism called local illumination environments that enables interactive direct illumination in complex scenes on a cluster of off-the-shelf PCs. A local illumination environment (LIE) caches geometric and radiometric information related to direct illumination. A LIE is associated with every octree cell constructed over the scene. Each LIE stores a set of visible lights, with associated occluders (if they exist). LIEs are effective at accelerating direct illumination because they both eliminate shadow rays for fully visible and fully occluded regions of the scene, and decrease the cost of shadow rays in other regions. Shadow ray computation for the partially occluded regions is accelerated using the cached potential occluders. One important implication of storing occluders is that rendering is accelerated while producing accurate hard and soft shadows. This paper also describes a simple perceptual metric based on Weber's law that further improves the effectiveness of LIEs in the fully visible and partially occluded regions. LIE construction is view-driven, continuously refined, and asynchronous with the shading process. In complex scenes of hundreds of thousands of polygons with up to a hundred lights, LIEs improve rendering performance by 10x to 30x over a traditional ray tracer.

Item: Hardware-Accelerated Point-Based Rendering of Complex Scenes (The Eurographics Association, 2002)
Authors: Coconu, Liviu; Hege, Hans-Christian. Editors: P. Debevec and S. Gibson.
High quality point rendering methods have been developed in recent years. A common drawback of these approaches is the lack of hardware support. We propose a novel point rendering technique that yields good image quality while making full use of hardware acceleration. Previous research revealed various advantages and drawbacks of point rendering over traditional rendering. Thus, a guideline in our algorithm design has been to allow both primitive types simultaneously and to dynamically choose the one best suited for rendering. An octree-based spatial representation, containing both triangles and sampled points, is used for level-of-detail and visibility calculations. Points in each block are stored in a generalized layered depth image. McMillan's algorithm is extended and hierarchically applied in the octree to warp overlapping Gaussian fuzzy splats in occlusion-compatible order, so that z-buffer tests are avoided. We show how to use off-the-shelf hardware to draw elliptical Gaussian splats oriented according to normals and to perform texture filtering. The result is a hybrid polygon-point system with increased efficiency compared to previous approaches.
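As an illustration of the elliptical Gaussian splats mentioned in the last entry, the sketch below evaluates a single splat's screen-space weight from a 2x2 covariance matrix; the covariance values and the blending context are our assumptions, not the paper's implementation.

```python
import numpy as np

def splat_weight(dx: float, dy: float, cov: np.ndarray) -> float:
    """Weight of a splat at screen-space offset (dx, dy) from its center."""
    d = np.array([dx, dy])
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d))

cov = np.array([[2.0, 0.5], [0.5, 1.0]])   # hypothetical covariance from normal and view
print(splat_weight(0.5, -0.3, cov))
```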