EGWR02: 13th Eurographics Workshop on Rendering

  • Item
    Local Illumination Environments for Direct Lighting Acceleration
    (The Eurographics Association, 2002) Fernandez, Sebastian; Bala, Kavita; Greenberg, Donald P.; P. Debevec and S. Gibson
    Computing high-quality direct illumination in scenes with many lights is an open area of research. This paper presents a world-space caching mechanism called local illumination environments that enables interactive direct illumination in complex scenes on a cluster of off-the-shelf PCs. A local illumination environment (LIE) caches geometric and radiometric information related to direct illumination. A LIE is associated with every octree cell constructed over the scene. Each LIE stores a set of visible lights, with associated occluders (if they exist). LIEs are effective at accelerating direct illumination because they both eliminate shadow rays for fully visible and fully occluded regions of the scene, and decrease the cost of shadow rays in other regions. Shadow ray computation for the partially occluded regions is accelerated using the cached potential occluders. One important implication of storing occluders is that rendering is accelerated while producing accurate hard and soft shadows. This paper also describes a simple perceptual metric based on Weber's law that further improves the effectiveness of LIEs in the fully visible and partially occluded regions. LIE construction is view-driven, continuously refined, and asynchronous with the shading process. In complex scenes of hundreds of thousands of polygons with up to a hundred lights, the LIEs improve rendering performance by 10x to 30x over a traditional ray tracer.
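
    The caching idea above lends itself to a short sketch. The following Python fragment illustrates how a shader might consume a cell's LIE; the field names (fully_visible, partially_occluded), the contribution method, and the blocked predicate are illustrative assumptions, not the paper's actual interface.

    ```python
    def shade_direct(point, lie, blocked):
        """Direct lighting using a cell's LIE cache (hypothetical field names).

        blocked(point, light, occluder) -> True if occluder cuts the shadow ray."""
        radiance = 0.0
        for light in lie.fully_visible:
            radiance += light.contribution(point)      # no shadow ray needed
        # Lights cached as fully occluded contribute nothing: no rays cast at all.
        for light, occluders in lie.partially_occluded:
            if not any(blocked(point, light, occ) for occ in occluders):
                radiance += light.contribution(point)  # test only cached occluders
        return radiance
    ```
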
  • Item
    Interactive Global Illumination using Fast Ray Tracing
    (The Eurographics Association, 2002) Wald, Ingo; Kollig, Thomas; Benthin, Carsten; Keller, Alexander; Slusallek, Philipp; P. Debevec and S. Gibson
    Rasterization hardware provides interactive frame rates for rendering dynamic scenes, but lacks the ray tracing capability required for efficient global illumination simulation. Existing ray tracing based methods yield high quality renderings but are far too slow for interactive use. We present a new parallel global illumination algorithm that scales perfectly, has minimal preprocessing and communication overhead, applies highly efficient sampling techniques based on randomized quasi-Monte Carlo integration, and benefits from a fast parallel ray tracing implementation that shoots coherent groups of rays. The resulting performance allows arbitrary changes to be applied to the scene while global illumination, including shadows from area light sources, indirect illumination, specular effects, and caustics, is simulated at interactive frame rates. Ceasing interaction rapidly yields high quality renderings.
  • Item
    Interactive Global Illumination Using Selective Photon Tracing
    (The Eurographics Association, 2002) Dmitriev, Kirill; Brabec, Stefan; Myszkowski, Karol; Seidel, Hans-Peter; P. Debevec and S. Gibson
    We present a method for interactive global illumination computation which is embedded in the framework of quasi-Monte Carlo photon tracing and density estimation techniques. The method exploits temporal coherence of illumination by tracing photons selectively to the scene regions that require an illumination update. Such regions are identified with high probability by a small number of pilot photons. Once a pilot photon is found to require updating, the remaining photons with similar paths in the scene can be found immediately. This is possible due to the periodicity property inherent in the multi-dimensional Halton sequence, which is used to generate the photons. If not all invalid photons can be updated during a single frame, frames are progressively refined in subsequent cycles. The order in which the photons are updated is decided by inexpensive energy- and perception-based criteria whose goal is to minimize the perceivability of outdated illumination. The method buckets all photons on the fly into mesh elements and does not require any data structures in the temporal domain, which makes it suitable for interactive rendering of complex scenes. Since mesh-based reconstruction of lighting patterns with high spatial frequencies is inefficient, we use a hybrid approach in which direct illumination and the resulting shadows are rendered using graphics hardware.
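
    The multi-dimensional Halton sequence at the heart of this selection scheme is simple to write down; below is a minimal Python version of the standard radical-inverse construction (not the authors' code). Indices that differ by a multiple of b^k share their first k base-b digits, which is the periodicity the method exploits to enumerate photons with similar paths.

    ```python
    def radical_inverse(i, base):
        # Mirror the base-b digits of i about the radix point: 123 -> 0.321 in base b.
        f, r = 1.0, 0.0
        while i > 0:
            f /= base
            r += f * (i % base)
            i //= base
        return r

    def halton(i, bases=(2, 3, 5, 7)):
        # One point of the multi-dimensional Halton sequence, one prime base per dimension.
        return tuple(radical_inverse(i, b) for b in bases)
    ```
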
  • Item
    Enhancing and Optimizing the Render Cache
    (The Eurographics Association, 2002) Walter, Bruce; Drettakis, George; Greenberg, Donald P.; P. Debevec and S. Gibson
    Interactive rendering often requires the use of simplified shading algorithms with reduced illumination fidelity. Higher quality rendering algorithms are usually too slow for interactive use. The render cache is a technique to bridge this performance gap and allow ray-based renderers to be used in interactive contexts by providing automatic sample interpolation, frame-to-frame sample reuse, and prioritized sampling. In this paper we present several extensions to the original render cache including predictive sampling, reorganized computation for better memory coherence, an additional interpolation filter to handle sparser data, and SIMD acceleration. These optimizations allow the render cache to scale to larger resolutions, reduce its visual artifacts, and provide better handling of low sample rates. We also provide a downloadable binary to allow researchers to evaluate and use the render cache.
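
    Frame-to-frame reuse rests on reprojecting cached world-space samples into each new view. A minimal numpy sketch of that step is given below; the row-vector convention and 4x4 view-projection matrix are assumptions, and the real render cache uses more elaborate point storage and depth heuristics.

    ```python
    import numpy as np

    def reproject(points, colors, view_proj, width, height):
        """Splat cached world-space samples into a new frame, keeping nearest hits."""
        depth = np.full((height, width), np.inf)
        image = np.zeros((height, width, 3))
        clip = np.hstack([points, np.ones((len(points), 1))]) @ view_proj.T
        w = clip[:, 3]
        ndc = clip[:, :3] / clip[:, 3:4]               # perspective divide
        for (x, y, z), c, wi in zip(ndc, colors, w):
            if wi <= 0 or not (-1 <= x <= 1 and -1 <= y <= 1):
                continue                               # behind camera or off-screen
            px = int((x * 0.5 + 0.5) * (width - 1))
            py = int((0.5 - y * 0.5) * (height - 1))
            if z < depth[py, px]:                      # z-buffered: keep nearest sample
                depth[py, px], image[py, px] = z, c
        return image, depth
    ```
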
  • Item
    Hardware-Accelerated Point-Based Rendering of Complex Scenes
    (The Eurographics Association, 2002) Coconu, Liviu; Hege, Hans-Christian; P. Debevec and S. Gibson
    High quality point rendering methods have been developed in recent years. A common drawback of these approaches is the lack of hardware support. We propose a novel point rendering technique that yields good image quality while making full use of hardware acceleration. Previous research revealed various advantages and drawbacks of point rendering over traditional rendering. Thus, a guideline in our algorithm design has been to allow both primitive types simultaneously and to dynamically choose the one best suited for rendering. An octree-based spatial representation, containing both triangles and sampled points, is used for level-of-detail and visibility calculations. Points in each block are stored in a generalized layered depth image. McMillan's algorithm is extended and hierarchically applied in the octree to warp overlapping Gaussian fuzzy splats in occlusion-compatible order, so that z-buffer tests are avoided. We show how to use off-the-shelf hardware to draw elliptical Gaussian splats oriented according to normals and to perform texture filtering. The result is a hybrid polygon-point system with increased efficiency compared to previous approaches.
  • Item
    Efficient High Quality Rendering of Point Sampled Geometry
    (The Eurographics Association, 2002) Botsch, Mario; Wiratanaya, Andreas; Kobbelt, Leif; P. Debevec and S. Gibson
    We propose a highly efficient hierarchical representation for point sampled geometry that automatically balances sampling density and point coordinate quantization. The representation is very compact, with a memory consumption of far less than 2 bits per point position that does not depend on the quantization precision. We present an efficient rendering algorithm that exploits the hierarchical structure of the representation to perform fast 3D transformations and shading. The algorithm is extended to surface splatting, which yields high quality anti-aliased and watertight surface renderings. Our pure software implementation renders up to 14 million Phong shaded and textured samples per second and about 4 million anti-aliased surface splats on a commodity PC. This is more than a factor of 10 faster than previous algorithms.
  • Item
    Spatio-Temporal View Interpolation
    (The Eurographics Association, 2002) Vedula, Sundar; Baker, Simon; Kanade, Takeo; P. Debevec and S. Gibson
    We propose a fully automatic algorithm for view interpolation of a completely non-rigid dynamic event across both space and time. The algorithm operates by combining images captured across space to compute voxel models of the scene shape at each time instant, and images captured across time to compute the "scene flow" between the voxel models. The scene flow is the non-rigid 3D motion of every point in the scene. To interpolate in time, the voxel models are "flowed" using an appropriate multiple of the scene flow and a smooth surface is fitted to the result. The novel image is then computed by ray-casting to the surface at the intermediate time instant, following the scene flow to the neighboring time instants, projecting into the input images at those times, and finally blending the results. We use our algorithm to create re-timed slow-motion fly-by movies of dynamic real-world events.
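
    The temporal half of the interpolation reduces to advecting each shape sample by a fraction of its scene flow and blending appearance fetched at the two bracketing time instants. A tiny illustrative sketch (not the authors' implementation) follows:

    ```python
    import numpy as np

    def flow_shape(points_t0, scene_flow, alpha):
        """Advect shape samples to the intermediate time t0 + alpha, alpha in [0, 1]."""
        return points_t0 + alpha * scene_flow          # per-point non-rigid 3D motion

    def blend_appearance(color_t0, color_t1, alpha):
        """Blend colors projected from the neighboring time instants."""
        return (1.0 - alpha) * np.asarray(color_t0) + alpha * np.asarray(color_t1)
    ```
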
  • Item
    Towards Real-Time Texture Synthesis with the Jump Map
    (The Eurographics Association, 2002) Zelinka, Steve; Garland, Michael; P. Debevec and S. Gibson
    While texture synthesis has been well-studied in recent years, real-time techniques remain elusive. To help facilitate real-time texture synthesis, we divide the task of texture synthesis into two phases: a relatively slow analysis phase, and a real-time synthesis phase. Any particular texture need only be analyzed once, and then an unlimited amount of texture may be synthesized in real-time. Our analysis phase generates a jump map, which stores for each input pixel a set of matching input pixels (jumps). Texture synthesis proceeds in real-time as a random walk through the jump map. Each new pixel is synthesized by extending the patch of input texture from which one of its neighbours was copied. Occasionally, a jump is taken through the jump map to begin a new patch. Despite the method's extreme simplicity, its speed and output quality compare favourably with recent patch-based algorithms.
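
    The synthesis walk can be sketched in a few lines. The version below is a deliberately simplified scanline variant, assuming a jump_map dictionary from source coordinates to lists of matching coordinates; the paper's method extends the patch of whichever already-synthesized neighbour each pixel was copied from.

    ```python
    import random

    def synthesize(src, jump_map, width, height, jump_prob=0.05):
        """Random-walk texture synthesis over a precomputed jump map.

        src: 2D array of pixels; jump_map[(x, y)]: source pixels matching (x, y)."""
        sh, sw = len(src), len(src[0])
        out = [[None] * width for _ in range(height)]
        sx, sy = random.randrange(sw), random.randrange(sh)
        for y in range(height):
            for x in range(width):
                jumps = jump_map.get((sx, sy), [])
                if jumps and random.random() < jump_prob:
                    sx, sy = random.choice(jumps)      # take a jump: start a new patch
                out[y][x] = src[sy][sx]
                sx = (sx + 1) % sw                     # extend the current patch
            sy = (sy + 1) % sh                         # advance with the output scanline
        return out
    ```
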
  • Item
    Synthesizing Bark
    (The Eurographics Association, 2002) Lefebvre, Sylvain; Neyret, Fabrice; P. Debevec and S. Gibson
    Despite the high quality reached by today's CG tree generators, there exists no realistic model for generating the appearance of bark: simple texture maps are generally used, showing obvious flaws unless the tree is entirely painted by an artist. Beyond modeling the appearance of bark, the difficulties lie in adapting the bark features to the age of each branch, ensuring continuity between adjacent parts of the tree, and possibly ensuring continuity through time. We propose a model of bark generation which produces either geometry or texture, and is dedicated to the widespread family of fracture-based barks. Since tree growth occurs mostly along the circumference, we consider circular strips of bark on which fractures can appear, propagate these fractures to the other strips, and enlarge them over time. Our semi-empirical model runs in interactive time, and allows automatic or artist-influenced bark generation with parameters that are intuitive for the artist. Moreover, we can simulate many different instances of the same bark family. In the paper, our generated bark is compared (favourably) to real bark.
  • Item
    Signal-Specialized Parametrization
    (The Eurographics Association, 2002) Sander, Pedro V.; Gortler, Steven J.; Snyder, John; Hoppe, Hugues; P. Debevec and S. Gibson
    To reduce memory requirements for texture mapping a model, we build a surface parametrization specialized to its signal (such as color or normal). Intuitively, we want to allocate more texture samples in regions with greater signal detail. Our approach is to minimize signal approximation error - the difference between the original surface signal and its reconstruction from the sampled texture. Specifically, our signal-stretch parametrization metric is derived from a Taylor expansion of signal error. For fast evaluation, this metric is pre-integrated over the surface as a metric tensor. We minimize this nonlinear metric using a novel coarse-to-fine hierarchical solver, further accelerated with a fine-to-coarse propagation of the integrated metric tensor. Use of metric tensors permits anisotropic squashing of the parametrization along directions of low signal gradient. Texture area can often be reduced by a factor of 4 for a desired signal accuracy compared to nonspecialized parametrizations.
  • Item
    A Real-Time Distributed Light Field Camera
    (The Eurographics Association, 2002) Yang, Jason C.; Everett, Matthew; Buehler, Chris; McMillan, Leonard; P. Debevec and S. Gibson
    We present the design and implementation of a real-time, distributed light field camera. Our system allows multiple viewers to navigate virtual cameras in a dynamically changing light field that is captured in real-time. Our light field camera consists of 64 commodity video cameras that are connected to off-the-shelf computers. We employ a distributed rendering algorithm that allows us to overcome the data bandwidth problems inherent in dynamic light fields. Our algorithm works by selectively transmitting only those portions of the video streams that contribute to the desired virtual views. This technique not only reduces the total bandwidth, but it also allows us to scale the number of cameras in our system without increasing network bandwidth. We demonstrate our system with a number of examples.
  • Item
    Time Dependent Photon Mapping
    (The Eurographics Association, 2002) Cammarano, Mike; Jensen, Henrik Wann; P. Debevec and S. Gibson
    The photon map technique for global illumination does not specifically address animated scenes. In particular, prior work has not considered the problem of temporal sampling (motion blur) while using the photon map. In this paper we examine several approaches for simulating motion blur with the photon map. In particular we show that a distribution of photons in time combined with the standard photon map radiance estimate is incorrect, and we introduce a simple generalization that correctly handles photons distributed in both time and space. Our results demonstrate that this time dependent photon map extension allows fast and correct estimates of motion-blurred illumination including motion-blurred caustics.
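
    One way to picture the corrected estimate is as a density estimate over a space-time volume rather than a disc: photons carry timestamps, and the normalization accounts for both the search radius and the time window. The fragment below is a plausible sketch of that idea, not the paper's exact estimator.

    ```python
    import math

    def radiance_estimate(photons, x, t, r, dt):
        """Space-time photon density estimate at point x and time t.

        photons: iterable of (position, timestamp, rgb_power) records."""
        total = [0.0, 0.0, 0.0]
        for pos, pt, power in photons:
            if math.dist(pos, x) <= r and abs(pt - t) <= 0.5 * dt:
                for c in range(3):
                    total[c] += power[c]
        norm = math.pi * r * r * dt                    # disc area times shutter interval
        return [c / norm for c in total]
    ```
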
  • Item
    Picture Perfect RGB Rendering Using Spectral Prefiltering and Sharp Color Primaries
    (The Eurographics Association, 2002) Ward, Greg; Eydelberg-Vileshin, Elena; P. Debevec and S. Gibson
    Accurate color rendering requires the consideration of many samples over the visible spectrum, and advanced rendering tools developed by the research community offer multispectral sampling towards this goal. However, for practical reasons including efficiency, white balance, and data demands, most commercial rendering packages still employ a naive RGB model in their lighting calculations. This results in colors that are often qualitatively different from the correct ones. In this paper, we demonstrate two independent and complementary techniques for improving RGB rendering accuracy without impacting calculation time: spectral prefiltering and color space selection. Spectral prefiltering is an obvious but overlooked method of preparing input colors for a conventional RGB rendering calculation, which achieves exact results for the direct component, and very accurate results for the interreflected component when compared with full-spectral rendering. In an empirical error analysis of our method, we show how the choice of rendering color space also affects final image accuracy, independent of prefiltering. Specifically, we demonstrate the merits of a particular transform that has emerged from the color research community as the best performer in computing white point adaptation under changing illuminants: the Sharp RGB space.
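
    The prefiltering step itself is compact: fold the dominant illuminant into each material color before rendering, normalizing by the illuminant's own RGB so the direct component comes out exact. A hedged numpy sketch, assuming all spectra are sampled at the same wavelengths:

    ```python
    import numpy as np

    def prefilter_rgb(reflectance, illuminant, sensors):
        """Spectral prefiltering: bake the scene illuminant into a material RGB.

        reflectance, illuminant: length-N spectra; sensors: 3 x N RGB response
        curves (e.g. for the Sharp primaries)."""
        lit = reflectance * illuminant                 # spectrum leaving the surface
        rgb = sensors @ lit                            # project onto the RGB sensors
        white = sensors @ illuminant                   # the illuminant's own RGB
        return rgb / white                             # direct lighting is now exact
    ```
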
  • Item
    Accelerating Path Tracing by Re-Using Paths
    (The Eurographics Association, 2002) Bekaert, Philippe; Sbert, Mateu; Halton, John; P. Debevec and S. Gibson
    This paper describes a new acceleration technique for rendering algorithms, like path tracing, that use so-called gathering random walks. Usually in path tracing, each traced path contributes to only a single point on the virtual screen. We propose to combine paths traced through nearby screen points in such a way that each path contributes to multiple screen points in a provably good way. Our approach is unbiased and is not restricted to diffuse light scattering. It complements previous image noise reduction techniques for Monte Carlo ray tracing. We observe speed-ups of one order of magnitude in the computation of indirect illumination.
  • Item
    Video Flashlights - Real Time Rendering of Multiple Videos for Immersive Model Visualization
    (The Eurographics Association, 2002) Sawhney, H. S.; Arpa, A.; Kumar, R.; Samarasekera, S.; Aggarwal, M.; Hsu, S.; Nister, D.; Hanna, K.; P. Debevec and S. Gibson
    Videos and 3D models have traditionally existed in separate worlds and as distinct representations. Although texture maps for 3D models have traditionally been derived from multiple still images, real-time mapping of live videos as textures on 3D models has not been attempted. This paper presents a system for rendering multiple live videos in real-time over a 3D model as a novel and demonstrative application of the power of commodity graphics hardware. The system, metaphorically called the Video Flashlight system, "illuminates" a static 3D model with live video textures from static and moving cameras in the same way as a flashlight (torch) illuminates an environment. The Video Flashlight system is also an augmented reality solution for security and monitoring systems that deploy numerous cameras to monitor a large-scale campus or an urban site. Current video monitoring systems are highly limited in providing global awareness since they typically display numerous camera videos on a grid of 2D displays. In contrast, the Video Flashlight system exploits the real-time rendering capabilities of current graphics hardware and renders live videos from various parts of an environment co-registered with the model. The user gets a global view of the model and is also able to visualize the dynamic videos simultaneously in the context of the model. In particular, the locations of pixels and objects seen in the videos are precisely overlaid on the model while the user navigates through the model. The paper presents an overview of the system, details the real-time rendering, and demonstrates the efficacy of the augmented reality application.
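
    At its core, mapping a live video onto the model is projective texturing: each model point is projected through the tracked camera's calibration to find which video pixel covers it. A small numpy sketch of the lookup (occlusion handling via a depth test is omitted, and the matrix convention is an assumption):

    ```python
    import numpy as np

    def video_texture_coords(world_pts, cam_view_proj):
        """Project model points into a camera image to index the live video frame.

        cam_view_proj: 4x4 world-to-clip matrix of the tracked video camera."""
        clip = np.hstack([world_pts, np.ones((len(world_pts), 1))]) @ cam_view_proj.T
        w = clip[:, 3:4]
        uv = clip[:, :2] / w * 0.5 + 0.5               # NDC [-1, 1] -> texture [0, 1]
        visible = (w[:, 0] > 0) & (uv >= 0).all(1) & (uv <= 1).all(1)
        return uv, visible                             # sample video only where visible
    ```
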
  • Item
    A Tone Mapping Algorithm for High Contrast Images
    (The Eurographics Association, 2002) Ashikhmin, Michael; P. Debevec and S. Gibson
    A new method is presented that takes as input a high dynamic range image and maps it into a limited range of luminance values reproducible by a display device. There is significant evidence that a similar operation is performed by the early stages of the human visual system (HVS). Our approach follows the functionality of the HVS without attempting to construct a sophisticated model of it. The operation is performed in three steps. First, we estimate the local adaptation luminance at each point in the image. Then, a simple function is applied to these values to compress them into the required display range. Since important image details can be lost during this process, we re-introduce details in a final pass over the image.
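
    The three-step structure is easy to see in code. The stand-in operators below (Gaussian adaptation estimate, a simple compressive function, multiplicative detail restoration) only illustrate the pipeline; Ashikhmin's actual functions are derived from visual-system measurements and differ.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def tone_map(lum, sigma=8.0):
        """Three-step sketch: adaptation estimate, compression, detail re-introduction."""
        adapt = gaussian_filter(lum, sigma)            # 1. local adaptation luminance
        compressed = adapt / (1.0 + adapt)             # 2. compress into display range
        detail = lum / np.maximum(adapt, 1e-6)         # 3. details lost in smoothing...
        return np.clip(compressed * detail, 0.0, 1.0)  # ...restored multiplicatively
    ```
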
  • Item
    Microfacet Billboarding
    (The Eurographics Association, 2002) Yamazaki, Shuntaro; Sagawa, Ryusuke; Kawasaki, Hiroshi; Ikeuchi, Katsushi; Sakauchi, Masao; P. Debevec and S. Gibson
    Rendering of intricately shaped objects that are soft or cluttered is difficult because we cannot accurately acquire their complete geometry. Since their geometry varies drastically, modeling them using fixed facets can lead to severe artifacts when viewed from singular directions. In this paper, we propose a novel modeling method, "microfacet billboarding", which uses view-dependent "microfacets" with view-dependent textures. The facets discretely approximate the geometry of the object and are aligned perpendicular to the viewing direction. The texture of each facet is selected from the most suitable texture images according to the viewpoint. Microfacet billboarding can render intricate geometry from various viewpoints. We first describe the basic algorithm of microfacet billboarding. We also predict the artifacts caused by the use of discrete facets and analyze the sampling intervals of geometry and texture needed to make these artifacts negligible. In addition to the modeling method, we have implemented a real-time renderer using a hardware-accelerated technique. To evaluate the efficiency of our method, we compared it with traditional texture mapping onto a mesh model, and showed that our method has a great advantage in rendering intricately shaped objects.
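
    The geometric part of a microfacet is just a small quad kept perpendicular to the viewing direction; a sketch of the corner computation (illustrative only, with view_dir the unit vector from the facet toward the viewer):

    ```python
    import numpy as np

    def billboard_corners(center, view_dir, up, size):
        """Corners of a square facet aligned perpendicular to the viewing direction."""
        right = np.cross(up, view_dir)
        right /= np.linalg.norm(right)
        facet_up = np.cross(view_dir, right)           # in-plane up vector
        h = 0.5 * size
        return [center + sx * h * right + sy * h * facet_up
                for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1))]
    ```
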
  • Item
    Textured Depth Meshes for Real-Time Rendering of Arbitrary Scenes
    (The Eurographics Association, 2002) Jeschke, Stefan; Wimmer, Michael; P. Debevec and S. Gibson
    This paper presents a new approach to generating textured depth meshes (TDMs), an impostor-based scene representation that can be used to accelerate the rendering of static polygonal models. The TDMs are precalculated for a fixed viewing region (view cell). The approach relies on a layered rendering of the scene to produce a voxel-based representation. Next, a highly complex polygon mesh is constructed that covers all the voxels. Afterwards, this mesh is simplified using a special error metric to ensure that all voxels stay covered. Finally, the remaining polygons are resampled using the voxel representation to obtain their textures. The contribution of our approach is manifold: first, it can handle polygonal models without any knowledge of their structure. Second, only scene parts that may become visible from within the view cell are represented, thereby cutting down on impostor complexity and storage costs. Third, an error metric guarantees that the impostors are practically indistinguishable from the original model (i.e. no rubber-sheet effects or holes appear as in most previous approaches). Furthermore, current graphics hardware is exploited for the construction and use of the impostors.
  • Item
    Exact From-Region Visibility Culling
    (The Eurographics Association, 2002) Nirenstein, S.; Blake, E.; Gain, J.; P. Debevec and S. Gibson
    To pre-process a scene for the purpose of visibility culling during walkthroughs, it is necessary to solve visibility from all the elements of a finite partition of viewpoint space. Many conservative and approximate solutions have been developed that solve for visibility rapidly. The idealised exact solution for general 3D scenes has often been regarded as computationally intractable. Our exact algorithm for finding the visible polygons in a scene from a region is a computationally tractable pre-process that can handle scenes on the order of millions of polygons. The essence of our idea is to represent 3D polygons, and the stabbing lines connecting them, in a 5D Euclidean space derived from Plücker space, and then to perform geometric subtractions of occluded lines from the set of potential stabbing lines. We have built a query architecture around this algorithm that allows for its practical application to large scenes. We have tested the algorithm on two different types of scene: despite a large constant computational overhead, it is highly scalable, with a time dependency close to linear in the output produced.
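
    The 5D line space is built on Plücker coordinates, which are compact enough to sketch (the standard construction below; the paper's subtraction machinery in this space is far more involved). The sign of the permuted inner product of two lines tells on which side they pass one another, which is the primitive test behind stabbing-line queries.

    ```python
    def pluecker(p, q):
        """Pluecker coordinates (direction, moment) of the line through points p, q."""
        d = [q[i] - p[i] for i in range(3)]
        m = [p[1] * q[2] - p[2] * q[1],                # moment: p x q
             p[2] * q[0] - p[0] * q[2],
             p[0] * q[1] - p[1] * q[0]]
        return d + m

    def side(a, b):
        """Permuted inner product; its sign classifies how the two lines pass."""
        return (a[0] * b[3] + a[1] * b[4] + a[2] * b[5] +
                a[3] * b[0] + a[4] * b[1] + a[5] * b[2])
    ```
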
  • Item
    GigaWalk: Interactive Walkthrough of Complex Environments
    (The Eurographics Association, 2002) Baxter III, William V.; Sud, Avneesh; Govindaraju, Naga K.; Manocha, Dinesh; P. Debevec and S. Gibson
    We present a new parallel algorithm and a system, GigaWalk, for interactive walkthrough of complex, gigabyte-sized environments. Our approach combines occlusion culling and levels-of-detail and uses two graphics pipelines with one or more processors. GigaWalk uses a unified scene graph representation for multiple acceleration techniques, and performs spatial clustering of geometry, conservative occlusion culling, and load-balancing between graphics pipelines and processors. GigaWalk has been used to render CAD environments composed of tens of millions of polygons at interactive rates on systems consisting of two graphics pipelines. Overall, our system's combination of levels-of-detail and occlusion culling techniques results in significant improvements in frame rate over view-frustum culling or either technique alone.
  • Item
    Fast Primitive Distribution for Illustration
    (The Eurographics Association, 2002) Secord, Adrian; Heidrich, Wolfgang; Streit, Lisa; P. Debevec and S. Gibson
    In this paper we present a high-quality, image-space approach to illustration that preserves continuous tone by probabilistically distributing primitives while maintaining interactive rates. Our method allows for frame-to-frame coherence by matching movements of primitives with changes in the input image. It can be used to create a variety of drawing styles by varying the primitive type or direction. We show that our approach is able to preserve both tone and, depending on the drawing style, high-frequency detail. Finally, while our algorithm requires only an image as input, additional 3D information enables the creation of a larger variety of drawing styles.
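
    Drawing primitive positions with density proportional to tone can be done quickly by inverting a cumulative distribution built over the image, one binary search per primitive. The sketch below captures that idea only; the paper's data structures also support frame-to-frame coherence.

    ```python
    import bisect, random

    def build_cdf(tone):
        """Cumulative distribution over pixels, weighted by darkness (1 - tone)."""
        cdf, total = [], 0.0
        for row in tone:
            for v in row:
                total += 1.0 - v
                cdf.append(total)
        return cdf, len(tone[0])

    def sample_positions(cdf, width, n):
        """Draw n primitive positions by inverting the CDF."""
        pts = []
        for _ in range(n):
            i = bisect.bisect_left(cdf, random.uniform(0.0, cdf[-1]))
            pts.append((i % width, i // width))        # pixel holding this primitive
        return pts
    ```
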
  • Item
    Real-Time Halftoning: A Primitive For Non-Photorealistic Shading
    (The Eurographics Association, 2002) Freudenberg, Bert; Masuch, Maic; Strothotte, Thomas; P. Debevec and S. Gibson
    We introduce halftoning as a general primitive for real-time non-photorealistic shading. It is capable of producing a variety of rendering styles, ranging from engraving with lighting-dependent line width to pen-and-ink style drawings using prioritized stroke textures. Since monitor resolution is limited we employ a smooth threshold function that provides stroke antialiasing. By applying the halftone screen in texture space and evaluating the threshold function for each pixel we can influence the shading on a pixel-by-pixel basis. This enables many effects to be used, including indication mapping and individual stroke lighting. Our real-time halftoning method is a drop-in replacement for conventional multitexturing and runs on commodity hardware. Thus, it is easy to integrate in existing applications, as we demonstrate with an artistically rendered level in a game engine.
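
    The smooth threshold can be pictured as a smoothstep comparison between the target intensity and the halftone screen value, in place of a hard binary test; a stand-in sketch (parameter names are illustrative):

    ```python
    def smooth_threshold(intensity, screen_value, softness=0.1):
        """Antialiased halftone test: smoothstep instead of a binary threshold."""
        t = (intensity - screen_value) / softness + 0.5
        t = min(max(t, 0.0), 1.0)
        return t * t * (3.0 - 2.0 * t)                 # 0 = ink stroke, 1 = paper
    ```
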
  • Item
    Curve Analogies
    (The Eurographics Association, 2002) Hertzmann, Aaron; Oliver, Nuria; Curless, Brian; Seitz, Steven M.; P. Debevec and S. Gibson
    This paper describes a method for learning statistical models of 2D curves, and shows how these models can be used to design line art rendering styles by example. A user can create a new style by providing an example of the style, e.g. by sketching a curve in a drawing program. Our method can then synthesize random new curves in this style, and modify existing curves to have the same style as the example. This method can incorporate position constraints on the resulting curves.
  • Item
    Appearance based object modeling using texture database: Acquisition, compression and rendering
    (The Eurographics Association, 2002) Furukawa, R.; Kawasaki, H.; Ikeuchi, K.; Sakauchi, M.; P. Debevec and S. Gibson
    Image-based object modeling can be used to compose photorealistic images of modeled objects under various rendering conditions, such as viewpoint, light directions, etc. However, it is challenging to acquire the large number of object images required for all combinations of capturing parameters and to then handle the resulting huge data sets for the model. This paper presents a novel modeling method for acquiring and preserving the appearance of objects. Using a specialized capturing platform, we first acquire the objects' geometric information and their complete 4D-indexed texture sets, or bi-directional texture functions (BTFs), in a highly automated manner. Then we compress the acquired texture database using tensor product expansion. The compressed texture database facilitates rendering objects with arbitrary viewpoints, illumination, and deformation.
  • Item
    The Free-form Light Stage
    (The Eurographics Association, 2002) Masselus, Vincent; Dutré, Philip; Anrys, Frederik; P. Debevec and S. Gibson
    We present the Free-form Light Stage, a system that captures the reflectance field of an object using a free-moving, hand-held light source. By photographing the object under different illumination conditions, we are able to render the object under any lighting condition, using a linear combination of basis images. During the data acquisition, the light source is moved freely around the object and hence, for each picture, the illuminant direction is unknown. This direction is estimated automatically from the images. Although the reflectance field is sampled non-uniformly, appropriate weighting coefficients are calculated. Using this system, we are able to relight objects in a convincing and realistic way.
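
    Relighting is then a weighted sum of the basis photographs. In the sketch below, the weights argument stands in for the paper's compensation for non-uniform sampling, and env_light is an assumed callable giving the target illumination per direction.

    ```python
    import numpy as np

    def relight(basis_images, basis_dirs, weights, env_light):
        """Render the object under new lighting as a linear combination of photos.

        basis_dirs: estimated light direction per photo; env_light(d) -> RGB."""
        out = np.zeros(np.shape(basis_images[0]), dtype=float)
        for img, d, w in zip(basis_images, basis_dirs, weights):
            out += env_light(d) * w * np.asarray(img)  # scale each photo by new light
        return out
    ```
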
  • Item
    Acquisition and Rendering of Transparent and Refractive Objects
    (The Eurographics Association, 2002) Matusik, Wojciech; Pfister, Hanspeter; Ziegler, Remo; Ngan, Addy; McMillan, Leonard; P. Debevec and S. Gibson
    This paper introduces a new image-based approach to capturing and modeling highly specular, transparent, or translucent objects. We have built a system for automatically acquiring high quality graphical models of objects that are extremely difficult to scan with traditional 3D scanners. The system consists of turntables, a set of cameras and lights, and monitors to project colored backdrops. We use multi-background matting techniques to acquire alpha and environment mattes of the object from multiple viewpoints. Using the alpha mattes we reconstruct an approximate 3D shape of the object. We use the environment mattes to compute a high-resolution surface reflectance field. We also acquire a low-resolution surface reflectance field using the overhead array of lights. Both surface reflectance fields are used to relight the objects and to place them into arbitrary environments. Our system is the first to acquire and render transparent and translucent 3D objects, such as a glass of beer, from arbitrary viewpoints under novel illumination.
  • Item
    Image-based Environment Matting
    (The Eurographics Association, 2002) Wexler, Yonatan; Fitzgibbon, Andrew. W.; Zisserman, Andrew.; P. Debevec and S. Gibson
    Environment matting is a powerful technique for modeling the complex light-transport properties of real-world optically active elements: transparent, refractive and reflective objects. Recent research has shown how environment mattes can be computed for real objects under carefully controlled laboratory conditions. However, many objects for which environment mattes are necessary for accurate rendering cannot be placed into a calibrated lighting environment. We show in this paper that analysis of the way in which optical elements distort the appearance of their backgrounds allows the construction of environment mattes in situ, without the need for specialized calibration. Specifically, given multiple images of the same element over the same background, where the element and background have relative motion, it is shown that both the background and the optical element's light-transport path can be computed. We demonstrate the technique on two different examples. In the first case, the optical element's geometry is simple, and evaluation of the realism of the output is easy. In the second, previous techniques would be difficult to apply. We show that image-based environment matting yields a realistic solution. We discuss how the stability of the solution depends on the number of images used, and how to regularize the solution when only a small number of images are available.
  • Item
    Fast, Arbitrary BRDF Shading for Low-Frequency Lighting Using Spherical Harmonics
    (The Eurographics Association, 2002) Kautz, Jan; Sloan, Peter-Pike; Snyder, John; P. Debevec and S. Gibson
    Real-time shading using general (e.g., anisotropic) BRDFs has so far been limited to a few point or directional light sources. We extend such shading to smooth, area lighting using a low-order spherical harmonic basis for the lighting environment. We represent the 4D product function of the BRDF times the cosine factor (the dot product of the incident lighting and surface normal vectors) as a 2D table of spherical harmonic coefficients. Each table entry represents, for a single view direction, the integral of this product function times the lighting on the hemisphere, expressed in spherical harmonics. This reduces the shading integral to a simple dot product of 25-component vectors, easily evaluated on PC graphics hardware. Non-trivial BRDF models require rotating the lighting coefficients to a local frame at each point on an object, currently forming the computational bottleneck. Real-time results can be achieved by fixing the view to allow dynamic lighting, or vice versa. We also generalize a previous method for precomputed radiance transfer to handle general BRDF shading. This provides shadows and interreflections that respond in real-time to lighting changes on a preprocessed object of arbitrary material (BRDF) type.
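
    The "25-component dot product" corresponds to an order-5 (l <= 4) spherical harmonic basis. A toy sketch of the final shading step, assuming the lighting coefficients have already been rotated into the local frame (the bottleneck the abstract mentions):

    ```python
    import numpy as np

    def shade(transfer_row, light_coeffs):
        """Exit radiance for one view direction as an SH dot product.

        transfer_row: 25 SH coefficients of BRDF x cosine for this view;
        light_coeffs: 25 x 3 RGB SH lighting coefficients in the local frame."""
        return np.dot(transfer_row, light_coeffs)      # (25,) . (25, 3) -> RGB
    ```
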
  • Item
    Approximate Soft Shadows on Arbitrary Surfaces using Penumbra Wedges
    (The Eurographics Association, 2002) Akenine-Möller, Tomas; Assarsson, Ulf; P. Debevec and S. Gibson
    Shadow generation has been the subject of serious investigation in computer graphics, and many clever algorithms have been suggested. However, previous algorithms cannot render high quality soft shadows onto arbitrary, animated objects in real time. Pursuing this goal, we present a new soft shadow algorithm that extends the standard shadow volume algorithm by replacing each shadow quadrilateral with a new primitive, called the penumbra wedge. For each silhouette edge as seen from the light source, a penumbra wedge is created that approximately models the penumbra volume that this edge gives rise to. Together, the penumbra wedges can render images that are often remarkably close to more precisely rendered soft shadows. Furthermore, our new primitive is designed so that it can be rasterized efficiently. Many real-time algorithms can only use planes as shadow receivers, while ours can handle arbitrary shadow receivers. The proposed algorithm can be of great value to, e.g., 3D computer games, especially since it is highly likely that this algorithm can be implemented on programmable graphics hardware coming out within the next year, and because games often prefer perceptually convincing shadows.