EGSR04: 15th Eurographics Symposium on Rendering


Estimating source spectra and spectral albedos from RGB data for rerendering

Koenderink, J. J.

Sketch Interpretation and Refinement Using Statistical Models

Simhon, Saul
Dudek, Gregory

Rendering Evolution at Industrial Light and Magic

Hery, Christophe

PointWorks: Abstraction and Rendering of Sparsely Scanned Outdoor Environments

Xu, Hui
Gossett, Nathan
Chen, Baoquan

Programmable Style for NPR Line Drawing

Grabli, Stéphane
Turquin, Emmanuel
Durand, Frédo
Sillion, François X.

Rendering Forest Scenes in Real-Time

Decaudin, Philippe
Neyret, Fabrice

An Interactive Out-of-Core Rendering Framework for Visualizing Massively Complex Models

Wald, Ingo
Dietrich, Andreas
Slusallek, Philipp

A Framework for Multiperspective Rendering

Yu, J.
McMillan, L.

Real-time appearance preserving out-of-core rendering with shadows

Guthe, Michael
Borodin, Pavel
Balázs, Ákos
Klein, Reinhard

Image-Based Stereoscopic Painterly Rendering

Stavrakis, E.
Gelautz, M.

Rendering Procedural Terrain by Geometry Image Warping

Dachsbacher, Carsten
Stamminger, Marc

Realtime Caustics Using Distributed Photon Mapping

Günther, Johannes
Wald, Ingo
Slusallek, Philipp

Simulating Photon Mapping for Real-time Applications

Larsen, Bent Dalgaard
Christensen, Niels Jørgen

An Irradiance Atlas for Global Illumination in Complex Production Scenes

Christensen, Per H.
Batali, Dana

Anti-aliasing and Continuity with Trapezoidal Shadow Maps

Martin, Tobias
Tan, Tiow-Seng

Light Space Perspective Shadow Maps

Wimmer, Michael
Scherzer, Daniel
Purgathofer, Werner

A Self-Shadow Algorithm for Dynamic Hair using Density Clustering

Mertens, Tom
Kautz, Jan
Bekaert, Philippe
Reeth, Frank Van

A Lixel for every Pixel

Chong, Hamilton Y.
Gortler, Steven J.

Alias-Free Shadow Maps

Aila, Timo
Laine, Samuli

Hemispherical Rasterization for Self-Shadowing of Dynamic Objects

Kautz, Jan
Lehtinen, Jaakko
Aila, Timo

An Efficient Hybrid Shadow Rendering Algorithm

Chan, Eric
Durand, Fredo

A spectral-particle hybrid method for rendering falling snow

Langer, M. S.
Zhang, L.
Klein, A.W.
Bhatia, A.
Pereira, J.
Rekhi, D.

CC Shadow Volumes

Lloyd, D. Brandon
Wendt, Jeremy
Govindaraju, Naga K.
Manocha, Dinesh

Hardware Accelerated Visibility Preprocessing using Adaptive Sampling

Nirenstein, S.
Blake, E.

All-focused light field rendering

Kubota, Akira
Takahashi, Keita
Aizawa, Kiyoharu
Chen, Tsuhan

A Self-Reconfigurable Camera Array

Zhang, Cha
Chen, Tsuhan

Generalized Displacement Maps

Wang, Xi
Tong, Xin
Lin, Stephen
Hu, Shimin
Guo, Baining
Shum, Heung-Yeung

Bixels: Picture Samples with Sharp Embedded Boundaries

Tumblin, Jack
Choudhury, Prasun

Feature-Based Textures

Ramanarayanan, G.
Bala, K.
Walter, B.

Progressively-Refined Reflectance Functions from Natural Illumination

Matusik, Wojciech
Loper, Matthew
Pfister, Hanspeter

Combining Higher-Order Wavelets and Discontinuity Meshing: a Compact Representation for Radiosity

Holzschuch, N.
Alonso, L.

Smooth Reconstruction and Compact Representation of Reflectance Functions for Image-based Relighting

Masselus, Vincent
Peers, Pieter
Dutré, Philip
Willemsy, Yves D.

All-Frequency Precomputed Radiance Transfer for Glossy Objects

Liu, Xinguo
Sloan, Peter-Pike
Shum, Heung-Yeung
Snyder, John

Animatable Facial Reflectance Fields

Hawkins, Tim
Wenger, Andreas
Tchou, Chris
Gardner, Andrew
Göransson, Fredrik
Debevec, Paul

A Novel Hemispherical Basis for Accurate and Efficient Rendering

Gautron, Pascal
Krivanek, Jaroslav
Pattanaik, Sumanta
Bouatouch, Kadi

Spherical Harmonic Gradients for Mid-Range Illumination

Annen, Thomas
Kautz, Jan
Durand, Frédo
Seidel, Hans-Peter

Practical Rendering of Multiple Scattering Effects in Participating Media

Premoze, Simon
Ashikhmin, Michael
Tessendorf, Jerry
Ramamoorthi, Ravi
Nayar, Shree

Lattice-Boltzmann Lighting

Geist, Robert
Rasche, Karl
Westall, James
Schalkoff, Robert

All-Frequency Relighting of Non-Diffuse Objects using Separable BRDF Approximation

Wang, Rui
Tran, John
Luebke, David

Efficient Rendering of Atmospheric Phenomena

Riley, Kirk
Ebert, David S.
Kraus, Martin
Tessendorf, Jerry
Hansen, Charles

An Analytical Model for Skylight Polarisation

Wilkie, A.
Ulbricht, C.
Tobler, Robert F.
Zotti, G.
Purgathofer, W.



Recent Submissions

Now showing 1 - 41 of 41
  • Item
    Estimating source spectra and spectral albedos from RGB data for rerendering
    (The Eurographics Association, 2004) Koenderink, J. J.; Alexander Keller and Henrik Wann Jensen
    I consider the problem of estimating material properties (the spectral albedo) on the basis of -object colors- (at worst only RGB data say). I show how to obtain a priori likely estimates for the white point, the spectral composition of the source, and the spectral albedos of the objects in a scene. I also show how to construct the general solutions. These general solutions are so broad as to render them practically useless. There are good reasons to disregard the larger part of the solution space, because very general considerations suggest that the specific solutions constructed with the methods discussed here are very likely to yield sensible and useful results in practice. From a principled perspective it is desirable to be able to construct the full solution space though. Since the results are in the scene, rather than the image domain, they are suitable for rerendering purposes.
  • Item
    Sketch Interpretation and Refinement Using Statistical Models
    (The Eurographics Association, 2004) Simhon, Saul; Dudek, Gregory; Alexander Keller and Henrik Wann Jensen
    We present a system for generating 2D illustrations from hand-drawn outlines consisting of only curve strokes. A user can draw a coarse sketch and the system automatically augments the shape, thickness, color and surrounding texture of the curves making up the sketch. The styles for these refinements are learned from examples whose semantics have been pre-classified. There can be several styles applicable to a curve, and the system automatically identifies which one to use and how to use it based on a curve's shape and its context in the illustration. Our approach is based on a Hierarchical Hidden Markov Model. We present a two-level hierarchy in which the refinement process is applied at the curve level and at the scene level.
  • Item
    Rendering Evolution at Industrial Light and Magic
    (The Eurographics Association, 2004) Hery, Christophe; Alexander Keller and Henrik Wann Jensen
    From Jurassic Park to Van Helsing, the rendering technologies at ILM have evolved over the last 10 years, both to satisfy the high demands of our clients and also those of the general public. State-of-the-art rendering techniques such as volume rendering, ambient occlusion, image-based rendering, sub-surface scattering and global illumination are now in common use. This summary will give a brief history of how rendering schemes came to be deployed (and fairly often pioneered) at our facility and the challenges they brought with them.
  • Item
    PointWorks: Abstraction and Rendering of Sparsely Scanned Outdoor Environments
    (The Eurographics Association, 2004) Xu, Hui; Gossett, Nathan; Chen, Baoquan; Alexander Keller and Henrik Wann Jensen
    This paper describes a system, dubbed PointWorks, for rendering three-dimensionally digitized outdoor environments in non-photorealistic rendering styles. The challenge in rendering scanned outdoor environments is accommodating their inaccuracy, incompleteness, and large size to deliver a smooth animation without suggesting the underlying data deficiency. The key method discussed in this paper is employing artistic drawing techniques to illustrate features of varying importance and accuracy. We employ a point-based representation of the scanned environment and operate directly on point-based models for abstraction and rendering. We develop a framework for producing mainly two artistic styles: painterly and profile lines. Strategies have also been employed to leverage modern graphics hardware for achieving interactive rendering of large scenes.
  • Item
    Programmable Style for NPR Line Drawing
    (The Eurographics Association, 2004) Grabli, Stéphane; Turquin, Emmanuel; Durand, Frédo; Sillion, François X.; Alexander Keller and Henrik Wann Jensen
    This paper introduces a programmable approach to non-photorealistic line drawing from 3D models, inspired by programmable shaders in traditional rendering. We propose a new image creation model where all operations are controlled through user-defined procedures. A view map describing all relevant support lines in the drawing and their topological arrangement is first created from the 3D model; a number of style modules operate on this map, by procedurally selecting, chaining or splitting lines, before creating strokes and assigning drawing attributes. The resulting drawing system permits flexible control of all elements of drawing style: first, different style modules can be applied to different types of lines in a view; second, the topology and geometry of strokes are entirely controlled from the programmable modules; and third, stroke attributes are assigned procedurally and can be correlated at will with various scene or view properties. Finally, we propose new density control strategies where strokes can be adapted or omitted to avoid visual clutter. We illustrate the components of our system and show how style modules successfully capture stylized visual characteristics that can be applied across a wide range of models.
  • Item
    Rendering Forest Scenes in Real-Time
    (The Eurographics Association, 2004) Decaudin, Philippe; Neyret, Fabrice; Alexander Keller and Henrik Wann Jensen
    Forests are crucial for scene realism in applications such as light simulators. This paper proposes a new representation allowing for the real-time rendering of realistic forests covering an arbitrary terrain. It lets us produce dense forests corresponding to continuous non-repetitive fields made of thousands of trees with full parallax. Our representation draws on volumetric textures and aperiodic tiling: the forest consists of a set of edge-compatible prisms containing forest samples which are aperiodically mapped onto the ground. The representation allows for quality rendering, thanks to appropriate 3D non-linear filtering. It relies on LODs and on a GPU-friendly structure to achieve real-time performance. Dynamic lighting and shadowing are beyond the scope of this paper. On the other hand, we require no advanced graphics feature except 3D textures and decent fill and vertex transform rates. However, we can take advantage of vertex shaders so that the slicing of the volumetric texture is entirely done on the GPU.
  • Item
    An Interactive Out-of-Core Rendering Framework for Visualizing Massively Complex Models
    (The Eurographics Association, 2004) Wald, Ingo; Dietrich, Andreas; Slusallek, Philipp; Alexander Keller and Henrik Wann Jensen
    With the tremendous advances in both hardware capabilities and rendering algorithms, rendering performance is steadily increasing. Even consumer graphics hardware can render many million triangles per second. However, scene complexity seems to be rising even faster than rendering performance, with no end to even more complex models in sight. In this paper, we are targeting the interactive visualization of the "Boeing 777" model, a highly complex model of 350 million individual triangles, which - due to its sheer size and complex internal structure - simply cannot be handled satisfactorily by today's techniques. To render this model, we use a combination of real-time ray tracing, a low-level out of core caching and demand loading strategy, and a hierarchical, hybrid volumetric/lightfield-like approximation scheme for representing not-yet-loaded geometry. With this approach, we are able to render the full 777 model at several frames per second even on a single commodity desktop PC.
  • Item
    A Framework for Multiperspective Rendering
    (The Eurographics Association, 2004) Yu, J.; McMillan, L.; Alexander Keller and Henrik Wann Jensen
    We present a framework for the direct rendering of multiperspective images. We treat multiperspective imaging systems as devices for capturing a smoothly varying set of rays, and we show that under an appropriate parametrization, multiperspective images can be characterized as continuous manifolds in ray space. We use a recently introduced class of General Linear Cameras (GLC), which describe all 2D linear subspaces of rays, as primitives for constructing multiperspective images. We show that GLCs, when constrained by an appropriate set of rules, can be laid out to tile the image plane and, hence, generate arbitrary multiperspective renderings. Our framework can easily render a broad class of multiperspective images, such as multiperspective panoramas, neocubist style renderings, and faux-animations from still-life scenes. We also show a method to minimize distortions in multiperspective images by uniformly sampling rays on a sampling plane even when they do not share a common origin.
  • Item
    Real-time appearance preserving out-of-core rendering with shadows
    (The Eurographics Association, 2004) Guthe, Michael; Borodin, Pavel; Balázs, Ákos; Klein, Reinhard; Alexander Keller and Henrik Wann Jensen
    Despite recent advances in finding efficient LOD representations for gigantic 3D objects, rendering of complex, gigabyte-sized models and environments is still a challenging task, especially under real-time constraints and high demands on visual accuracy. The two general approaches use either a polygon- or a point-based representation for the simplified geometry. With polygon-based approaches, high frame rates can be achieved by sacrificing the exact appearance and thus the image quality. Point-based approaches, on the other hand, preserve higher image quality at the cost of higher primitive counts and therefore lower frame rates. In this paper we present a new hybrid point-polygon LOD algorithm for real-time rendering of complex models and environments including shadows. While rendering different LODs, we preserve the appearance of an object by using a novel error measure for simplification which allows us to steer the LOD generation in such a way that the geometric as well as the appearance deviation is bounded in image space. Additionally, to enhance the perception of the models, shadows should be used. We present a novel LOD selection and prefetching method for real-time rendering of hard shadows. In contrast to the only currently available method for out-of-core shadow generation, our approach runs entirely on a single-CPU system.
  • Item
    Image-Based Stereoscopic Painterly Rendering
    (The Eurographics Association, 2004) Stavrakis, E.; Gelautz, M.; Alexander Keller and Henrik Wann Jensen
    We present a new image-based stereoscopic painterly algorithm that we use to automatically generate stereoscopic paintings. Our work is motivated by contemporary painters who have explored the aesthetic implications of painting stereo pairs of canvases. We base our method on two real images, acquired from spatially displaced cameras. We derive a depth map by utilizing computer vision depth-from-stereo techniques and use this information to plan and render stereo paintings. These paintings can be viewed stereoscopically, in which case the pictorial medium is perceptually extended by the viewer to better suggest the sense of distance.
  • Item
    Rendering Procedural Terrain by Geometry Image Warping
    (The Eurographics Association, 2004) Dachsbacher, Carsten; Stamminger, Marc; Alexander Keller and Henrik Wann Jensen
    We describe an approach for rendering large terrains in real-time. A digital elevation map defines the rough shape of the terrain. During rendering, procedural geometric and texture detail is added by the graphics hardware. We show how quad meshes can be generated quickly that have a locally varying resolution that is optimized for the inclusion of procedural detail. We obtain these distorted meshes by importance-based warping of geometry images. The resulting quad mesh can then be rendered very efficiently by graphics hardware, which also adds all visible procedural detail using vertex and fragment programs.
  • Item
    Realtime Caustics Using Distributed Photon Mapping
    (The Eurographics Association, 2004) Günther, Johannes; Wald, Ingo; Slusallek, Philipp; Alexander Keller and Henrik Wann Jensen
    With the advancements in realtime ray tracing and new global illumination algorithms we are now able to render the most important illumination effects at interactive rates. One of the major remaining issues is the fast and efficient simulation of caustic illumination, such as the illumination from a car headlight. The photon mapping algorithm is a simple and robust approach that generates high-quality results and is the preferred algorithm for computing caustic illumination. However, photon mapping has a number of properties that make it rather slow on today's processors. Photon mapping has also been notoriously difficult to parallelize efficiently. In this paper, we present a detailed analysis of the performance issues of photon mapping together with significant performance improvements for all aspects of the photon mapping technique. The solution forms a complete framework for realtime photon mapping that efficiently combines realtime ray tracing, optimized and improved photon mapping algorithms, and efficient parallelization across commodity PCs. The presented system achieves realtime photon mapping performance of up to 22 frames per second on non-trivial scenes, while still allowing for interactively updating all aspects of the scene, including lighting, material properties, and geometry.
  • Item
    Simulating Photon Mapping for Real-time Applications
    (The Eurographics Association, 2004) Larsen, Bent Dalgaard; Christensen, Niels Jørgen; Alexander Keller and Henrik Wann Jensen
    This paper introduces a novel method for simulating photon mapping for real-time applications. First we introduce a new method for selectively redistributing photons. Then we describe a method for selectively updating the indirect illumination. The indirect illumination is calculated using a new GPU-accelerated final gathering method and the illumination is then stored in light maps. Caustic photons are traced on the CPU and then drawn using points in the framebuffer, and finally filtered using the GPU. Both diffuse and non-diffuse surfaces can be handled by calculating the direct illumination on the GPU and the photon tracing on the CPU. We achieve real-time frame rates for dynamic scenes.
  • Item
    An Irradiance Atlas for Global Illumination in Complex Production Scenes
    (The Eurographics Association, 2004) Christensen, Per H.; Batali, Dana; Alexander Keller and Henrik Wann Jensen
    We introduce a tiled 3D MIP map representation of global illumination data. The representation is an adaptive, sparse octree with a "brick" at each octree node; each brick consists of 8³ voxels with sparse irradiance values. The representation is designed to enable efficient caching. Combined with photon tracing and recent advances in distribution ray tracing of very complex scenes, the result is a method for efficient and flexible computation of global illumination in very complex scenes. The method can handle scenes with far more textures, geometry, and photons than can fit in memory. We show an example of a CG movie scene that has been retrofitted with global illumination shading using our method.
  • Item
    Anti-aliasing and Continuity with Trapezoidal Shadow Maps
    (The Eurographics Association, 2004) Martin, Tobias; Tan, Tiow-Seng; Alexander Keller and Henrik Wann Jensen
    This paper proposes a new shadow map technique termed trapezoidal shadow maps to calculate high-quality shadows in real-time applications. To address the resolution problem of the standard shadow map approach, our technique approximates the eye's frustum as seen from the light with a trapezoid to warp it onto a shadow map. Such a trapezoidal approximation, which may at first seem straightforward, is carefully designed to achieve the goal of good shadow quality for objects from near to far, and to address the continuity problem that is found in all existing shadow map approaches. The continuity problem occurs mainly when the shadow map quality changes significantly from frame to frame due to the motion of the eye or the light. This results in flickering of shadows. On the whole, our proposed approach is simple to implement without using complex data structures, and it maps well to graphics hardware as shown in our experiments with large virtual scenes of hundreds of thousands to over a million triangles.
  • Item
    Light Space Perspective Shadow Maps
    (The Eurographics Association, 2004) Wimmer, Michael; Scherzer, Daniel; Purgathofer, Werner; Alexander Keller and Henrik Wann Jensen
    In this paper, we present a new shadow mapping technique that improves upon the quality of perspective and uniform shadow maps. Our technique uses a perspective transform specified in light space which allows treating all lights as directional lights and does not change the direction of the light sources. This gives all the benefits of the perspective mapping but avoids the problems inherent in perspective shadow mapping like singularities in post-perspective space, missed shadow casters etc. Furthermore, we show that both uniform and perspective shadow maps distribute the perspective aliasing error that occurs in shadow mapping unequally over the available depth range. We therefore propose a transform that equalizes this error and gives equally pleasing results for near and far viewing distances. Our method is simple to implement, requires no scene analysis and is therefore as fast as uniform shadow mapping.
  • Item
    A Self-Shadow Algorithm for Dynamic Hair using Density Clustering
    (The Eurographics Association, 2004) Mertens, Tom; Kautz, Jan; Bekaert, Philippe; Reeth, Frank Van; Alexander Keller and Henrik Wann Jensen
    Self-shadowing is an important factor in the appearance of hair and fur. In this paper we present a new rendering algorithm to accurately compute shadowed hair at interactive rates using graphics hardware. No constraint is imposed on the hair style, and its geometry can be dynamic. Similar to previously presented methods, a 1D visibility function is constructed for each line of sight of the light source view. Our approach differs from other work by treating the hair geometry as a 3D density field, which is sampled on the fly using simple rasterization. The rasterized fragments are clustered, effectively estimating the density of hair along a ray. Based hereon, the visibility function is constructed. We show that realistic self-shadowing of thousands of individual dynamic hair strands can be rendered at interactive rates using consumer graphics hardware.
  • Item
    A Lixel for every Pixel
    (The Eurographics Association, 2004) Chong, Hamilton Y.; Gortler, Steven J.; Alexander Keller and Henrik Wann Jensen
    Shadow mapping is a very useful tool for generating shadows in many real-time rendering settings and is even used in some off-line renderers. One of the difficulties when using a shadow map is obtaining a sufficiently dense sampling on shadowed surfaces to minimize shadow aliasing. Endlessly upping the light-image resolution is not always a viable option. In this paper we describe a shadow mapping technique that guarantees that, over a small number of chosen planes of interest (such as a floor and a wall), the shadow map is in fact perfectly sampled, i.e., for each pixel in the viewer camera there will be exactly one lixel in the shadow map that samples the exact same geometric point.
  • Item
    Alias-Free Shadow Maps
    (The Eurographics Association, 2004) Aila, Timo; Laine, Samuli; Alexander Keller and Henrik Wann Jensen
    In this paper we abandon the regular structure of shadow maps. Instead, we transform the visible pixels P(x, y, z) from screen space to the image plane of a light source P′(x′, y′, z′). The (x′, y′) are then used as sampling points when the geometry is rasterized into the shadow map. This eliminates the resolution issues that have plagued shadow maps for decades, e.g., jagged shadow boundaries. Incorrect self-shadowing is also greatly reduced, and semi-transparent shadow casters and receivers can be supported. A hierarchical software implementation is outlined.
  • Item
    Hemispherical Rasterization for Self-Shadowing of Dynamic Objects
    (The Eurographics Association, 2004) Kautz, Jan; Lehtinen, Jaakko; Aila, Timo; Alexander Keller and Henrik Wann Jensen
    We present a method for interactive rendering of dynamic models with self-shadows due to time-varying, low-frequency lighting environments. In contrast to previous techniques, the method is not limited to static or pre-animated models. Our main contribution is a hemispherical rasterizer, which rapidly computes visibility by rendering blocker geometry into a 2D occlusion mask with correct occluder fusion. The response of an object to the lighting is found by integrating the visibility function at each of the vertices against the spherical harmonic functions and the BRDF. This yields transfer coefficients that are then multiplied by the lighting coefficients to obtain the final, shadowed exitant radiance. No precomputation is necessary and memory requirements are modest. The method supports both diffuse and glossy BRDFs.
  • Item
    An Efficient Hybrid Shadow Rendering Algorithm
    (The Eurographics Association, 2004) Chan, Eric; Durand, Fredo; Alexander Keller and Henrik Wann Jensen
    We present a hybrid algorithm for rendering hard shadows accurately and efficiently. Our method combines the strengths of shadow maps and shadow volumes. We first use a shadow map to identify the pixels in the image that lie near shadow discontinuities. Then, we perform the shadow-volume computation only at these pixels to ensure accurate shadow edges. This approach simultaneously avoids the edge aliasing artifacts of standard shadow maps and avoids the high fillrate consumption of standard shadow volumes. The algorithm relies on a hardware mechanism for rapidly rejecting non-silhouette pixels during rasterization. Since current graphics hardware does not directly provide this mechanism, we simulate it using available features related to occlusion culling and show that dedicated hardware support requires minimal changes to existing technology.
  • Item
    A spectral-particle hybrid method for rendering falling snow
    (The Eurographics Association, 2004) Langer, M. S.; Zhang, L.; Klein, A.W.; Bhatia, A.; Pereira, J.; Rekhi, D.; Alexander Keller and Henrik Wann Jensen
    Falling snow has the visual property that it is simultaneously a set of discrete moving particles as well as a dynamic texture. To capture the dynamic texture properties of falling snow using particle systems can, however, require so many particles that it severely impacts rendering rates. Here we address this limitation by rendering the texture properties directly. We use a standard particle system to generate a relatively sparse set of falling snow flakes, and we then composite in a dynamic texture to fill in between the particles. The texture is generated using a novel image-based spectral synthesis method. The spectrum of the falling snow texture is defined by a dispersion relation in the image plane, derived from linear perspective. The dispersion relation relates image speed, image size, and particle depth. In the frequency domain, it relates the wavelength and speed of moving 2D image sinusoids. The parameters of this spectral snow can be varied both across the image and over time. This provides the flexibility to match the direction and speed parameters of the spectral snow to those of the falling particles. Camera motion can also be matched. Our method produces visually pleasing results at interactive rendering rates. We demonstrate our approach by adding snow effects to static and dynamic scenes. An extension for creating rain effects is also presented.
  • Item
    CC Shadow Volumes
    (The Eurographics Association, 2004) Lloyd, D. Brandon; Wendt, Jeremy; Govindaraju, Naga K.; Manocha, Dinesh; Alexander Keller and Henrik Wann Jensen
    We present a technique that uses culling and clamping (CC) for accelerating the performance of stencil-based shadow volume computation. Our algorithm reduces the fill requirements and rasterization cost of shadow volumes by reducing unnecessary rendering. A culling step removes shadow volumes that are themselves in shadow or do not contribute to the final image. Our novel clamping algorithms restrict shadow volumes to those regions actually containing shadow receivers. In this way, we avoid rasterizing shadow volumes over large regions of empty space. We utilize temporal coherence between successive frames to speed up clamping computations. Even with fairly coarse clamping we obtain a substantial reduction in fill requirements and shadow rendering time in dynamic environments composed of up to 100K triangles.
  • Item
    Hardware Accelerated Visibility Preprocessing using Adaptive Sampling
    (The Eurographics Association, 2004) Nirenstein, S.; Blake, E.; Alexander Keller and Henrik Wann Jensen
    We present a novel aggressive visibility preprocessing technique for general 3D scenes. Our technique exploits commodity graphics hardware and is faster than most conservative solutions, while simultaneously not overestimating the set of visible polygons. The cost of this benefit is that of potential image error. In order to reduce image error, we have developed an effective error minimization heuristic. We present results showing the application of our technique to highly complex scenes, consisting of many small polygons. We give performance results, an in-depth error analysis using various metrics, and an empirical analysis showing a high degree of scalability. We show that our technique can rapidly compute from-region visibility (1 hr 19 min for a 5 million polygon forest), with minimal error (0.3% of image). On average 91.3% of the scene is culled.
  • Item
    All-focused light field rendering
    (The Eurographics Association, 2004) Kubota, Akira; Takahashi, Keita; Aizawa, Kiyoharu; Chen, Tsuhan; Alexander Keller and Henrik Wann Jensen
    We present a novel reconstruction method that can synthesize an all in-focus view from under-sampled light fields, significantly suppressing aliasing artifacts. The presented method consists of two steps: 1) rendering multiple views at a given view point by performing light field rendering with different focal plane depths; and 2) iteratively reconstructing the all in-focus view by fusing the multiple views. We model the multiple views and the desired all in-focus view as a set of linear equations with a combination of textures at the focal depths. Aliasing artifacts can be modeled as spatially (shift) varying filters. We can solve this set of linear equations by using an iterative reconstruction approach. This method effectively integrates focused regions in each view into an all in-focus view without any local processing steps such as estimation of depth or segmentation of the focused regions.
  • Item
    A Self-Reconfigurable Camera Array
    (The Eurographics Association, 2004) Zhang, Cha; Chen, Tsuhan; Alexander Keller and Henrik Wann Jensen
    This paper presents a self-reconfigurable camera array system that captures video sequences from an array of mobile cameras, renders novel views on the fly and reconfigures the camera positions to achieve better rendering quality. The system is composed of 48 cameras mounted on mobile platforms. The contribution of this paper is twofold. First, we propose an efficient algorithm that is capable of rendering high-quality novel views from the captured images. The algorithm reconstructs a view-dependent multi-resolution 2D mesh model of the scene geometry on the fly and uses it for rendering. The algorithm combines region of interest (ROI) identification, JPEG image decompression, lens distortion correction, scene geometry reconstruction and novel view synthesis seamlessly on a single Intel Xeon 2.4 GHz processor, which is capable of generating novel views at 4-10 frames per second (fps). Second, we present a view-dependent adaptive capturing scheme that moves the cameras in order to show even better rendering results. Such camera reconfiguration naturally leads to a nonuniform arrangement of the cameras on the camera plane, which is both view-dependent and scene-dependent.
  • Item
    Generalized Displacement Maps
    (The Eurographics Association, 2004) Wang, Xi; Tong, Xin; Lin, Stephen; Hu, Shimin; Guo, Baining; Shum, Heung-Yeung; Alexander Keller and Henrik Wann Jensen
In this paper, we introduce a real-time algorithm to render the rich visual effects of general non-height-field geometric details, known as mesostructure. Our method is based on a five-dimensional generalized displacement map (GDM) that represents the distance of solid mesostructure along any ray cast from any point within a volumetric sample. With this GDM information, we propose a technique that computes mesostructure visibility jointly in object space and texture space, which enables both control of texture distortion and efficient computation of texture coordinates and shadowing. GDM can be rendered with either local or global illumination as a per-pixel process in graphics hardware to achieve real-time rendering of general mesostructure.
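The core idea of a GDM — tabulating, per sample point and ray direction, the distance to the first solid mesostructure — can be sketched in miniature. The sketch below is a brute-force 2-D analogue (the paper's real GDM is five-dimensional and hardware-filtered), and all names and parameters are illustrative:

```python
import numpy as np

def precompute_gdm(solid, directions, max_steps=64, step=0.5):
    """Brute-force a tiny 2-D analogue of a generalized displacement map.

    solid      : 2D bool grid, True where mesostructure is solid
    directions : list of unit 2D direction vectors (dx, dy)
    Returns gdm[y, x, d] = distance from (x, y) to the first solid cell
    along directions[d], or np.inf if the ray exits the sample.
    """
    h, w = solid.shape
    gdm = np.full((h, w, len(directions)), np.inf)
    for y in range(h):
        for x in range(w):
            for d, (dx, dy) in enumerate(directions):
                # March the ray in fixed increments until it hits or exits.
                for s in range(1, max_steps):
                    px, py = x + dx * s * step, y + dy * s * step
                    ix, iy = int(round(px)), int(round(py))
                    if not (0 <= ix < w and 0 <= iy < h):
                        break
                    if solid[iy, ix]:
                        gdm[y, x, d] = s * step
                        break
    return gdm
```

At render time a single table lookup then replaces this per-ray march, which is what makes per-pixel evaluation on graphics hardware feasible.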
  • Item
    Bixels: Picture Samples with Sharp Embedded Boundaries
    (The Eurographics Association, 2004) Tumblin, Jack; Choudhury, Prasun; Alexander Keller and Henrik Wann Jensen
Pixels store a digital image as a grid of point samples that can reconstruct a limited-bandwidth continuous 2-D source image. Although convenient for anti-aliased display, these bandwidth limits irreversibly discard important visual boundary information that is difficult or impossible to accurately recover from pixels alone. We propose bixels instead: they also store a digital image as a grid of point samples, but each sample keeps 8 extra bits to set embedded geometric boundaries that are infinitely sharp, more accurately placed, and directly machine-readable. Bixels represent images as piecewise-continuous, with discontinuous intensities and gradients at boundaries that form planar graphs. They reversibly combine vector and raster image features, decouple boundary sharpness from the number of samples used to store them, and do not mix unrelated but adjacent image contents, e.g. blue sky and green leaf. Bixels are meant to be compatible with pixels. A bixel is an image sample point with an 8-bit code for local boundaries. We describe a boundary-switched bilinear filter kernel for bixel reconstruction and pre-filtering to find bixel samples, a bixels-to-pixels conversion method for display, and an iterative method to combine pixels and given boundaries to make bixels. We discuss applications in texture synthesis, matting and compositing. We demonstrate sharpness-preserving enlargement, warping and bixels-to-pixels conversion with example images.
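The flavor of a boundary-switched bilinear filter can be shown with a heavily simplified sketch: interpolation weights are zeroed for samples lying across an embedded edge and then renormalized. Here a single boolean side flag per sample stands in for the paper's 8-bit boundary code, and all names are illustrative:

```python
import numpy as np

def boundary_switched_bilerp(samples, boundary, x, y):
    """Bilinear reconstruction that ignores samples across a boundary.

    samples  : 2D grid of sample values
    boundary : 2D bool grid; True marks samples on the far side of an
               embedded edge (a stand-in for per-sample boundary bits)
    (x, y)   : continuous lookup position
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    w = np.array([(1 - fx) * (1 - fy), fx * (1 - fy),
                  (1 - fx) * fy, fx * fy])
    idx = [(y0, x0), (y0, x0 + 1), (y0 + 1, x0), (y0 + 1, x0 + 1)]
    # The query point inherits the side of its nearest sample.
    side = boundary[idx[int(np.argmax(w))]]
    for k, (r, c) in enumerate(idx):
        if boundary[r, c] != side:
            w[k] = 0.0  # switch off samples on the other side of the edge
    w /= w.sum()
    return sum(wk * samples[r, c] for wk, (r, c) in zip(w, idx))
```

Unlike plain bilinear filtering, a lookup near the edge never blends values from the two sides, so the reconstructed discontinuity stays infinitely sharp under enlargement.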
  • Item
    Feature-Based Textures
    (The Eurographics Association, 2004) Ramanarayanan, G.; Bala, K.; Walter, B.; Alexander Keller and Henrik Wann Jensen
    This paper introduces feature-based textures, a new image representation that combines features and samples for high-quality texture mapping. Features identify boundaries within an image where samples change discontinuously. They can be extracted from vector graphics representations, or explicitly added to raster images to improve sharpness. Texture lookups are then interpolated from samples while respecting these boundaries. We present results from a software implementation of this technique demonstrating quality, efficiency and low memory overhead.
  • Item
    Progressively-Refined Reflectance Functions from Natural Illumination
    (The Eurographics Association, 2004) Matusik, Wojciech; Loper, Matthew; Pfister, Hanspeter; Alexander Keller and Henrik Wann Jensen
    In this paper we present a simple, robust, and efficient algorithm for estimating reflectance fields (i.e., a description of the transport of light through a scene) for a fixed viewpoint using images of the scene under known natural illumination. Our algorithm treats the scene as a black-box linear system that transforms an input signal (the incident light) into an output signal (the reflected light). The algorithm is hierarchical - it progressively refines the approximation of the reflectance field with an increasing number of training samples until the required precision is reached. Our method relies on a new representation for reflectance fields. This representation is compact, can be progressively refined, and quickly computes the relighting of scenes with complex illumination. Our representation and the corresponding algorithm allow us to efficiently estimate the reflectance fields of scenes with specular, glossy, refractive, and diffuse elements. The method also handles soft and hard shadows, inter-reflections, caustics, and subsurface scattering. We verify our algorithm and representation using two measurement setups and several scenes, including an outdoor view of the city of Cambridge.
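The black-box linear-system view can be made concrete with a toy sketch. The paper progressively refines the reflectance field from images under known natural illumination; the sketch below instead probes the system with one-hot basis lighting purely to show why relighting reduces to a matrix-vector product (all names are illustrative):

```python
import numpy as np

def estimate_transport(scene, n_lights, n_pixels):
    """Probe a black-box linear scene with one-hot illumination patterns.

    For a linear system, the columns of the transport matrix T are simply
    the scene's responses to the canonical basis lighting vectors.
    """
    T = np.zeros((n_pixels, n_lights))
    for j in range(n_lights):
        e = np.zeros(n_lights)
        e[j] = 1.0
        T[:, j] = scene(e)
    return T

def relight(T, light):
    """Relighting under any new illumination is a single product: T @ light."""
    return T @ light
```

Because light transport is linear, effects such as shadows, inter-reflections, and caustics are all baked into the columns of T; the paper's contribution is estimating a compact, progressively refined T from far fewer, natural-illumination measurements.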
  • Item
    Combining Higher-Order Wavelets and Discontinuity Meshing: a Compact Representation for Radiosity
    (The Eurographics Association, 2004) Holzschuch, N.; Alonso, L.; Alexander Keller and Henrik Wann Jensen
The radiosity method is used for global illumination simulation in diffuse scenes, or as an intermediate step in other methods. Radiosity computations using higher-order wavelets achieve a compact representation of the illumination on many parts of the scene, but are more expensive near discontinuities, such as shadow boundaries. Other methods use a mesh based on the set of discontinuities of the illumination function. The complexity of this set of discontinuities has so far proven prohibitive for large scenes, mostly because of the difficulty of robustly managing a geometrically complex set of triangles. In this paper, we present a method for computing radiosity that uses higher-order wavelet functions as a basis, and introduces discontinuities only when they simplify the resulting mesh. The result is displayed directly, without post-processing.
  • Item
    Smooth Reconstruction and Compact Representation of Reflectance Functions for Image-based Relighting
(The Eurographics Association, 2004) Masselus, Vincent; Peers, Pieter; Dutré, Philip; Willems, Yves D.; Alexander Keller and Henrik Wann Jensen
In this paper we present a new method to reconstruct reflectance functions for image-based relighting. A reflectance function describes how a pixel in a photograph is observed depending on the incident illumination on the depicted object. Additionally, we present a compact representation of the reconstructed reflectance functions. The reflectance functions are sampled from real objects by illuminating the object from a set of directions while recording photographs. Each pixel in a photograph is a sample of the reflectance function. Next, a smooth continuous function is reconstructed from the sampled reflectance function using different reconstruction techniques. The presented method maintains important high-frequency features such as highlights and self-shadowing and ensures visually pleasing relit images, computed with incident illumination containing high- and low-frequency features. The reconstructed reflectance functions and incident illumination can be expressed in a common set of basis functions, enabling a significant speed-up of the relighting process. We use a non-linear approximation of higher-order wavelets to preserve the smoothness of the reconstructed signal while maintaining good relit image quality. Our method improves on visual quality in comparison with previous image-based relighting methods, especially when animated incident illumination is used.
  • Item
    All-Frequency Precomputed Radiance Transfer for Glossy Objects
    (The Eurographics Association, 2004) Liu, Xinguo; Sloan, Peter-Pike; Shum, Heung-Yeung; Snyder, John; Alexander Keller and Henrik Wann Jensen
    We introduce a method based on precomputed radiance transfer (PRT) that allows interactive rendering of glossy surfaces and includes shadowing effects from dynamic, "all-frequency" lighting. Specifically, source lighting is represented by a cube map at resolution nL
  • Item
    Animatable Facial Reflectance Fields
    (The Eurographics Association, 2004) Hawkins, Tim; Wenger, Andreas; Tchou, Chris; Gardner, Andrew; Göransson, Fredrik; Debevec, Paul; Alexander Keller and Henrik Wann Jensen
    We present a technique for creating an animatable image-based appearance model of a human face, able to capture appearance variation over changing facial expression, head pose, view direction, and lighting condition. Our capture process makes use of a specialized lighting apparatus designed to rapidly illuminate the subject sequentially from many different directions in just a few seconds. For each pose, the subject remains still while six video cameras capture their appearance under each of the directions of lighting. We repeat this process for approximately 60 different poses, capturing different expressions, visemes, head poses, and eye positions. The images for each of the poses and camera views are registered to each other semi-automatically with the help of fiducial markers. The result is a model which can be rendered realistically under any linear blend of the captured poses and under any desired lighting condition by warping, scaling, and blending data from the original images. Finally, we show how to drive the model with performance capture data, where the pose is not necessarily a linear combination of the original captured poses.
  • Item
    A Novel Hemispherical Basis for Accurate and Efficient Rendering
    (The Eurographics Association, 2004) Gautron, Pascal; Krivanek, Jaroslav; Pattanaik, Sumanta; Bouatouch, Kadi; Alexander Keller and Henrik Wann Jensen
    This paper presents a new set of hemispherical basis functions dedicated to hemispherical data representation. These functions are derived from associated Legendre polynomials. We demonstrate the usefulness of this basis for representation of surface reflectance functions, rendering using environment maps and for efficient global illumination computation using radiance caching. We show that our basis is more appropriate for hemispherical functions than spherical harmonics. This basis can be efficiently combined with spherical harmonics in applications involving both hemispherical and spherical data.
  • Item
    Spherical Harmonic Gradients for Mid-Range Illumination
    (The Eurographics Association, 2004) Annen, Thomas; Kautz, Jan; Durand, Frédo; Seidel, Hans-Peter; Alexander Keller and Henrik Wann Jensen
    Spherical harmonics are often used for compact description of incident radiance in low-frequency but distant lighting environments. For interaction with nearby emitters, computing the incident radiance at the center of an object only is not sufficient. Previous techniques then require expensive sampling of the incident radiance field at many points distributed over the object. Our technique alleviates this costly requirement using a first-order Taylor expansion of the spherical-harmonic lighting coefficients around a point. We propose an interpolation scheme based on these gradients requiring far fewer samples (one is often sufficient). We show that the gradient of the incident-radiance spherical harmonics can be computed for little additional cost compared to the coefficients alone. We introduce a semi-analytical formula to calculate this gradient at run-time and describe how a simple vertex shader can interpolate the shading. The interpolated representation of the incident radiance can be used with any low-frequency light-transfer technique.
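The gradient-based interpolation amounts to a first-order Taylor extrapolation of the spherical-harmonic coefficient vector in space. The sketch below shows just that step (function and variable names are illustrative; the paper additionally derives the semi-analytical gradient formula and the vertex-shader interpolation):

```python
import numpy as np

def interpolate_sh(c0, grad_c0, p0, p):
    """First-order Taylor extrapolation of SH lighting coefficients.

    c0      : (n_coeffs,) SH coefficients of incident radiance sampled at p0
    grad_c0 : (n_coeffs, 3) spatial gradient of each coefficient at p0
    Returns the approximate coefficient vector at a nearby point p.
    """
    return c0 + grad_c0 @ (np.asarray(p, float) - np.asarray(p0, float))
```

For a coefficient field that varies linearly in space the extrapolation is exact, which is why a single sample with its gradient often suffices over an object for mid-range emitters.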
  • Item
    Practical Rendering of Multiple Scattering Effects in Participating Media
    (The Eurographics Association, 2004) Premoze, Simon; Ashikhmin, Michael; Tessendorf, Jerry; Ramamoorthi, Ravi; Nayar, Shree; Alexander Keller and Henrik Wann Jensen
    Volumetric light transport effects are significant for many materials like skin, smoke, clouds, snow or water. In particular, one must consider the multiple scattering of light within the volume. While it is possible to simulate such media using volumetric Monte Carlo or finite element techniques, those methods are very computationally expensive. On the other hand, simple analytic models have so far been limited to homogeneous and/or optically dense media and cannot be easily extended to include strongly directional effects and visibility in spatially varying volumes. We present a practical method for rendering volumetric effects that include multiple scattering. We show an expression for the point spread function that captures blurring of radiance due to multiple scattering. We develop a general framework for incorporating this point spread function, while considering inhomogeneous media - this framework could also be used with other analytic multiple scattering models.
  • Item
    Lattice-Boltzmann Lighting
    (The Eurographics Association, 2004) Geist, Robert; Rasche, Karl; Westall, James; Schalkoff, Robert; Alexander Keller and Henrik Wann Jensen
A new technique for lighting participating media is suggested. The technique is based on the lattice-Boltzmann method, which is gaining popularity as an alternative to finite-element methods for flow computations, due to its ease of implementation and ability to handle complex boundary conditions. A relatively simple, grid-based photon transport model is postulated and then shown to describe, in the limit, a diffusion process. An application to lighting clouds is provided, where cloud densities are generated by combining two well-established techniques. Performance of the new lighting technique is not real-time, but the technique is highly parallel and does offer an ability to easily represent complex scattering events. Sample renderings are included.
  • Item
    All-Frequency Relighting of Non-Diffuse Objects using Separable BRDF Approximation
    (The Eurographics Association, 2004) Wang, Rui; Tran, John; Luebke, David; Alexander Keller and Henrik Wann Jensen
    This paper presents a technique, based on pre-computed light transport and separable BRDF approximation, for interactive rendering of non-diffuse objects under all-frequency environment illumination. Existing techniques using spherical harmonics to represent environment maps and transport functions are limited to low-frequency light transport effects. Non-linear wavelet lighting approximation is able to capture all-frequency illumination and shadows for geometry relighting, but interactive rendering is currently limited to diffuse objects. Our work extends the wavelet-based approach to relighting of non-diffuse objects. We factorize the BRDF using separable decomposition and keep only a few low-order approximation terms, each consisting of a 2D light map paired with a 2D view map. We then pre-compute light transport matrices corresponding to each BRDF light map, and compress the data with a non-linear wavelet approximation. We use modern graphics hardware to accelerate precomputation. At run-time, a sparse light vector is multiplied by the sparse transport matrix at each vertex, and the results are further combined with texture lookups of the view direction into the BRDF view maps to produce view-dependent color. Using our technique, we demonstrate rendering of objects with several non-diffuse BRDFs under all-frequency, dynamic environment lighting at interactive rates.
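The run-time shading structure described above can be sketched per vertex: for each separable BRDF term, dot the precomputed (wavelet-compressed, here dense for simplicity) transport row with the light vector, then scale by a lookup into that term's view map. All names and the dense-array layout are illustrative assumptions:

```python
import numpy as np

def separable_relight(transport_rows, view_maps, light, view_idx):
    """Shade one vertex under a separable BRDF approximation.

    transport_rows : (n_terms, n_lights) precomputed transport row per
                     BRDF light-map term (stand-in for sparse wavelet rows)
    view_maps      : (n_terms, n_views) tabulated view-dependent factors
    light          : (n_lights,) current environment lighting vector
    view_idx       : index of the (discretized) view direction
    """
    color = 0.0
    for k in range(transport_rows.shape[0]):
        s = transport_rows[k] @ light            # light-map side: transport x light
        color += view_maps[k, view_idx] * s      # view-map side: texture lookup
    return color
```

In the paper both the light vector and the transport rows are sparse after non-linear wavelet approximation, so each vertex costs only a few non-zero multiply-adds per BRDF term.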
  • Item
    Efficient Rendering of Atmospheric Phenomena
    (The Eurographics Association, 2004) Riley, Kirk; Ebert, David S.; Kraus, Martin; Tessendorf, Jerry; Hansen, Charles; Alexander Keller and Henrik Wann Jensen
    Rendering of atmospheric bodies involves modeling the complex interaction of light throughout the highly scattering medium of water and air particles. Scattering by these particles creates many well-known atmospheric optical phenomena including rainbows, halos, the corona, and the glory. Unfortunately, most radiative transport approximations in computer graphics are ill-suited to render complex angularly dependent effects in the presence of multiple scattering at reasonable frame rates. Therefore, this paper introduces a multiple-model lighting system that efficiently captures these essential atmospheric effects. We have solved the rendering of fine angularly dependent effects in the presence of multiple scattering by designing a lighting approximation based upon multiple scattering phase functions. This model captures gradual blurring of chromatic atmospheric optical phenomena by handling the gradual angular spreading of the sunlight as it experiences multiple scattering events with anisotropic scattering particles. It has been designed to take advantage of modern graphics hardware; thus, it is capable of rendering these effects at near interactive frame rates.
  • Item
    An Analytical Model for Skylight Polarisation
    (The Eurographics Association, 2004) Wilkie, A.; Ulbricht, C.; Tobler, Robert F.; Zotti, G.; Purgathofer, W.; Alexander Keller and Henrik Wann Jensen
    Under certain circumstances the polarisation state of the illumination can have a significant influence on the appearance of scenes; outdoor scenes with specular surfaces - such as water bodies or windows - under clear, blue skies are good examples of such environments. In cases like that it can be essential to use a polarising renderer if a true prediction of nature is intended, but so far no polarising skylight models have been presented. This paper presents a plausible analytical model for the polarisation of the light emitted from a clear sky. Our approach is based on a suitable combination of several components with well-known characteristics, and yields acceptable results in considerably less time than an exhaustive simulation of the underlying atmospheric scattering phenomena would require.