Eurographics 2000 - STARs
https://diglib.eg.org:443/handle/10.2312/221

Shadow Computation: A Unified Perspective
Ghali, S.; Fiume, E.; Seidel, H.-P.
https://diglib.eg.org:443/handle/10.2312/egst20001028
2000-01-01
Methods for solving shadow problems by solving instances of visibility problems have long been known and exploited. There are, however, other potent uses of such a reduction of shadow problems, several of which we explore in this paper. Specifically, we describe algorithms that use a resolution-independent, or object-space, visibility structure for the computation of object-space shadows under point, linear, and area light sources. The connection between object-space visibility and shadow computation is well-known in computer graphics. We show how that fundamental observation can be recast and generalized within an object-space visibility structure. The edges in such a structure contain exactly the information needed to determine shadow edges under a point light source. Also, the locations along a linear or an area light source at which visibility changes (termed critical points and critical lines) provide the necessary information for computing shadow edges resulting from linear and area light sources. Not only are instances of all shadow problems thus reduced to visibility problems, but instances of shadow problems under linear and area light sources are also reduced to instances of shadow generation under point and linear light sources, respectively.
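The core observation — that the silhouette edges visible from a point light determine the hard shadow boundary — can be illustrated with a minimal sketch. This is not the paper's visibility-structure algorithm, only the underlying geometric step: projecting occluder vertices from a point light onto a receiver plane (here assumed to be z = 0) to obtain the cast shadow's vertices.

```python
import numpy as np

def project_to_ground(light, points):
    """Project occluder vertices from a point light onto the plane z = 0.

    The ray from the light L through a vertex P reaches z = 0 at
    parameter t = Lz / (Lz - Pz); the hit point is the hard-shadow
    vertex cast by P. Assumes the light is strictly above the occluder.
    """
    light = np.asarray(light, dtype=float)
    points = np.asarray(points, dtype=float)
    t = light[2] / (light[2] - points[:, 2])      # per-vertex ray parameter
    return light + t[:, None] * (points - light)  # intersection with z = 0

# A triangle hovering at z = 5, lit by a point light at (0, 0, 10):
tri = [(1.0, 0.0, 5.0), (0.0, 1.0, 5.0), (-1.0, -1.0, 5.0)]
shadow = project_to_ground((0.0, 0.0, 10.0), tri)
```

With the light twice as high as the occluder, t = 2 for every vertex, so the shadow is the triangle scaled by a factor of 2 about the light's ground projection.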
Geometric Signal Processing on Polygonal Meshes
Taubin, G.
https://diglib.eg.org:443/handle/10.2312/egst20001029
2000-01-01
Very large polygonal models, which are used in more and more graphics applications today, are routinely generated by a variety of methods such as surface reconstruction algorithms from 3D scanned data, isosurface construction algorithms from volumetric data, and photogrammetric methods applied to aerial photography. In this report we provide an overview of several closely related methods developed during the last few years to smooth, denoise, edit, compress, transmit, and animate very large polygonal models.
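A representative technique from this line of work is Taubin's lambda|mu smoothing, which treats vertex positions as a signal on the mesh graph and applies a low-pass filter. The sketch below is an illustrative implementation for a generic adjacency list, not code from the report; the default filter weights are the commonly quoted lambda = 0.5, mu = -0.53.

```python
import numpy as np

def taubin_smooth(verts, neighbors, lam=0.5, mu=-0.53, iterations=10):
    """Taubin's lambda|mu smoothing: alternate a shrinking Laplacian
    step (lam > 0) with an inflating step (mu < -lam). The pair acts
    as a low-pass filter on vertex positions, attenuating noise
    without the severe shrinkage of plain Laplacian smoothing."""
    v = np.asarray(verts, dtype=float).copy()
    for _ in range(iterations):
        for factor in (lam, mu):
            # Umbrella-operator Laplacian: average of neighbors minus vertex.
            delta = np.array([v[nbrs].mean(axis=0) - v[i]
                              for i, nbrs in enumerate(neighbors)])
            v += factor * delta
    return v

# Usage: denoise a closed polygon (a 1D "mesh" with cyclic adjacency).
n = 12
angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
rng = np.random.default_rng(0)
noisy = (np.stack([np.cos(angles), np.sin(angles)], axis=1)
         + 0.05 * rng.standard_normal((n, 2)))
ring = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
smoothed = taubin_smooth(noisy, ring)
```

On a surface mesh the adjacency list would come from the mesh edges; the filter itself is unchanged.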
Interactive Display of Global Illumination Solutions for Non-Diffuse Environments
Heidrich, Wolfgang
https://diglib.eg.org:443/handle/10.2312/egst20001025
2000-01-01
In recent years there has been a great deal of work on interactively displaying global illumination solutions for non-diffuse environments. This is an extremely active field of research in which many different approaches have been proposed recently. In this State-of-the-Art Report, we discuss and compare these approaches, which will hopefully lay the groundwork for systematically addressing the open questions in the future.
Visual Perception in Realistic Image Synthesis
McNamara, A.; Chalmers, A.; Trocianko, T.
https://diglib.eg.org:443/handle/10.2312/egst20001026
2000-01-01
Realism is often a primary goal in computer graphics imagery: we strive to create images that are perceptually indistinguishable from an actual scene. Rendering systems can now closely approximate the physical distribution of light in an environment. However, physical accuracy does not guarantee that the displayed images will have an authentic visual appearance. In recent years the emphasis in realistic image synthesis has begun to shift from the simulation of light in an environment to images that look as real as the physical environment they portray. In other words, the computer image should be not only physically correct but also perceptually equivalent to the scene it represents. This implies that aspects of the Human Visual System (HVS) must be considered if realism is required. Visual perception is employed in many different guises in graphics to achieve authenticity, and certain aspects of the HVS must be considered to identify the perceptual effects that a realistic rendering system must achieve in order to reproduce a visual response similar to that of a real scene.

This state-of-the-art report outlines the manner in which knowledge about visual perception increasingly appears in realistic image synthesis. The STAR is organised into three sections, each exploring the use of perception in realistic image synthesis with a slightly different emphasis and application. First, perception-driven rendering algorithms are described; these algorithms embed models of the HVS directly into global illumination computations in order to improve their efficiency. Then perception-based image quality metrics, which aim to compare images on a perceptual rather than physical basis, are presented; these metrics can be used to evaluate, validate, and compare imagery. Finally, tone reproduction operators, which attempt to map the vast range of computed radiance values to the limited range of display values, are discussed.
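To make the tone-reproduction problem concrete, here is a minimal global operator of the luminance-compression family: it is an illustrative sketch in the spirit of the operators this STAR surveys, not any specific operator from it. The "key" parameter and the log-average normalization are common conventions assumed here.

```python
import numpy as np

def tone_map(luminance, key=0.18):
    """A minimal global tone-reproduction operator (illustrative only):
    scale scene luminance so its log-average maps to a mid-grey "key",
    then compress the unbounded result into [0, 1) with L / (1 + L)."""
    L = np.asarray(luminance, dtype=float)
    log_avg = np.exp(np.mean(np.log(1e-6 + L)))  # log-average scene luminance
    Ls = key * L / log_avg                       # map log-average to mid-grey
    return Ls / (1.0 + Ls)                       # compress to displayable range

# Five orders of magnitude of scene radiance, mapped to display values:
display = tone_map(np.array([0.01, 0.1, 1.0, 10.0, 100.0]))
```

The compression step is monotone, so relative brightness ordering is preserved even though absolute contrast across the full dynamic range is not.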