Search Results
Now showing 1 - 9 of 9
Item: Ray Tracing Rational B-Spline Patches in VLSI (The Eurographics Association, 1987)
Schneider, Bengt-Olaf; Fons Kuijk and Wolfgang Strasser
Rational B-spline surfaces make it possible to merge the concept of freeform surfaces with that of surfaces described by rational polynomials, especially conic sections. For ray tracing it is crucial to determine the intersection between ray and object. Therefore an algorithm is developed that is suitable for a VLSI implementation. Some alternatives for the implementation of this algorithm are presented and discussed. The paper concludes with a discussion of some problems and possible further developments.

Item: Rendering and Visualization in Affordable Parallel Environments (Eurographics Association, 1998)
Bartz, Dirk; Silva, Claudio; Schneider, Bengt-Olaf
The scope of this full-day tutorial is the use of low- and medium-cost parallel environments (less than US $60K) for high-speed rendering and visualization. In particular, our focus is on the parallel graphics programming of multi-processor PCs or workstations, and networks of both. The current technology push in the consumer market for graphics hardware, small multiprocessor machines, and fast networks is bound to make all of these components less expensive. In this tutorial, attendees will learn how to leverage these advances in consumer hardware to achieve faster rendering by using parallel rendering algorithms and off-the-shelf software systems. This course will briefly touch on the tools necessary to make basic use of this technology: parallel programming paradigms (shared memory, message passing) and parallel rendering algorithms (including image-, object-, and time-space parallelism). Advantages and issues of the different methods will be discussed using several examples of polygonal graphics and volume rendering.

Item: Tutorial 6 - Rendering and Visualization in Parallel Environments (Eurographics Association, 1999)
Bartz, Dirk; Schneider, Bengt-Olaf; Silva, Claudio
The continuing commoditization of the computer market has precipitated a qualitative change. Increasingly powerful processors, large memories, big hard disks, high-speed networks, and fast 3D rendering hardware are now affordable without a large capital outlay. A new class of computers, dubbed Personal Workstations, has joined the traditional technical workstation as a platform for 3D modeling and rendering. In this tutorial, attendees will learn how to understand and leverage both technical and personal workstations as components of parallel rendering systems. The goal of the tutorial is twofold: Attendees will thoroughly understand the important characteristics of workstation architectures. We will present an overview of different workstation architectures, with special emphasis on current technical and personal workstations, addressing both single-processor and SMP architectures. We will also introduce important methods of programming in parallel environments, with special attention to how such techniques apply to developing parallel renderers. Attendees will learn about different approaches to implementing parallel renderers. The tutorial will cover parallel polygon rendering and parallel volume rendering. We will explain the underlying concepts of workload characterization, workload partitioning, and static, dynamic, and adaptive load balancing. We will then apply these concepts to characterize various parallelization strategies reported in the literature for polygon and volume rendering. We abstract from the actual implementation of these strategies and instead focus on a comparison of their benefits and drawbacks. Case studies will provide additional material to explain the use of these techniques. The tutorial will be structured into two main sections: We will first discuss the fundamentals of parallel programming and parallel machine architectures. Topics include message passing vs. shared memory, thread programming, a review of different SMP architectures, clustering techniques, PC architectures for personal workstations, and graphics hardware architectures. The second section builds on this foundation to describe key concepts and particular algorithms for parallel polygon rendering and parallel volume rendering.

Item: PROOF: An Architecture for Rendering in Object Space (The Eurographics Association, 1988)
Schneider, Bengt-Olaf; Claussen, Ute; A. A. M. Kuijk
This paper gives a short introduction to the field of computer image generation in hardware. It discusses the two main approaches, namely partitioning in image space and in object space. Based on the object space partitioning approach we have defined the PROOF architecture. PROOF is a system that aims at high-performance and high-quality rendering of raster images. High performance means that up to 30 pictures are generated in one second. The pictures are shaded and anti-aliased, giving the images a high degree of realism. The architecture comprises three stages which are responsible for hidden surface removal, shading, and filtering respectively. The first of these stages is a pipeline of object processors. Each of these processors stores and scan converts one object. Furthermore, it interpolates the depth and the normal vector across the object. Each object processor is able to handle objects of a certain primitive type. The specialization of an object processor to a certain primitive type is encapsulated in a single block called the primitive processor. The output of the object processor pipeline is the input to a stage for shading. The illumination model employed takes into account both diffuse and specular reflections. The paper reviews Gouraud and Phong shading with regard to their suitability for a hardware implementation. The final stage of the PROOF system is formed by a stage for filtering the colours of those objects that contribute to a pixel. This is done by constructing a subpixel mask and filtering across an area of 2x2 pixels. At the end, the paper briefly reports on the current state of the project.

Item: Siggraph/Eurographics Workshop on Graphics Hardware (Blackwell Publishers Ltd and the Eurographics Association, 1998)
Schneider, Bengt-Olaf

Item: A Processor for an Object-Oriented Rendering System (Blackwell Publishing Ltd and the Eurographics Association, 1988)
Schneider, Bengt-Olaf
An object-oriented approach to the real-time rendering of raster images is described. The architecture is based on a set of object processors arranged in a pipeline. These object processors scan convert the objects in the scene and interpolate depth and normal vectors across the objects. Hidden surface elimination is distributed along the pipeline. The structure of the object processors and of the algorithms employed is described in detail. Computing Reviews Classification: C.1.2, C.3, I.3.1

Item: Towards a Taxonomy for Display Processors (The Eurographics Association, 1989)
Schneider, Bengt-Olaf; Richard Grimsdale and Wolfgang Strasser
Image generation for raster displays proceeds in two main steps: geometry processing and pixel processing. The subsystem performing the pixel processing is called the display processor. In the paper a model for the display processor is developed that takes into account both function and timing properties. The model identifies scan conversion, hidden surface removal, shading and anti-aliasing as the key functions of the display processor. The timing model is expressed as an inequality that is fundamental for all display processor architectures. On the basis of that model a taxonomy is presented which classifies display processors according to four main criteria: function, partitioning, architecture and performance. The taxonomy is applied to five real display processors: Pixel-planes, SLAM, PROOF, the Ray-Casting Machine and the Structured Frame Store System. Investigation of existing display processor architectures on the basis of the developed taxonomy revealed a potential new architecture. This architecture partitions the image generation process in image space and employs a tree topology.

Item: M-Buffer: A Flexible MISD Architecture for Advanced Graphics (The Eurographics Association, 1992)
Schneider, Bengt-Olaf; Rossignac, Jarek; P. F. Lister
Contemporary graphics architectures are based on a hardware-supported geometric pipeline, a rasterizer, a z-buffer and two frame buffers. Additional pixel memory is used for alpha blending and for storing logical information. Although their functionality is growing, it is still limited because of the fixed use of pixel memory and the restricted set of operations provided by these architectures. A new class of graphics algorithms that considerably extends the current technology is based on a more flexible use of pixel memory, not supported by current architectures. The M-Buffer architecture described here divides pixel memory into general-purpose buffers, each associated with one processor. Pixel data is broadcast to all buffers simultaneously. Logical and numeric tests are performed by each processor and the results are broadcast and used by all buffers in parallel to evaluate logical expressions for the pixel update condition. The architecture is scalable by addition of buffer-processors, suitable for pixel parallelization, and permits the use of buffers for different purposes. The architecture, its functional description, and a powerful programming interface are described.

Item: Accelerating Polygon Clipping (The Eurographics Association, 1992)
Schneider, Bengt-Olaf; P. F. Lister
Polygon clipping is a central part of image generation and image visualization systems. In spite of its algorithmic simplicity it consumes a considerable amount of hardware or software resources. Polygon clipping performance is dominated by two processes: intersection calculations and data transfers. The paper analyzes the prevalent Sutherland-Hodgman algorithm for polygon clipping and identifies cases for which this algorithm performs inefficiently. Such cases are characterized by subsequent vertices in the input polygon that share a common region, e.g. a common halfspace. The paper will present new techniques that detect such constellations and simplify the input polygon such that the Sutherland-Hodgman algorithm runs more efficiently. Block diagrams and pseudo-code demonstrate that the new techniques are well suited for both hardware and software implementations. Finally, the paper discusses the results of a prototype implementation of the presented techniques. The analysis compares the performance of the new techniques to the traditional Sutherland-Hodgman algorithm for different test scenes. The new techniques reduce the number of data transfers by up to 90% and the number of intersection calculations by up to 60%.
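For reference, the baseline Sutherland-Hodgman algorithm that the last abstract analyzes can be sketched as follows. This is a minimal illustrative implementation, not the paper's accelerated variant: it clips against an axis-aligned rectangle only, and all function and variable names are my own. It makes the inefficiency visible that the paper targets: every vertex is tested against every clip edge, even for long runs of vertices lying entirely inside one halfspace.

```python
# Minimal sketch of Sutherland-Hodgman polygon clipping against an
# axis-aligned rectangle (illustrative; names are not from the paper).

def _intersect_x(a, b, x):
    # Intersection of segment a-b with the vertical line at x.
    t = (x - a[0]) / (b[0] - a[0])
    return (x, a[1] + t * (b[1] - a[1]))

def _intersect_y(a, b, y):
    # Intersection of segment a-b with the horizontal line at y.
    t = (y - a[1]) / (b[1] - a[1])
    return (a[0] + t * (b[0] - a[0]), y)

def clip_polygon(polygon, xmin, ymin, xmax, ymax):
    """Clip a polygon (list of (x, y) vertices) against a rectangle."""
    # Each clip edge is a halfplane test plus an intersection routine.
    edges = [
        (lambda p: p[0] >= xmin, lambda a, b: _intersect_x(a, b, xmin)),
        (lambda p: p[0] <= xmax, lambda a, b: _intersect_x(a, b, xmax)),
        (lambda p: p[1] >= ymin, lambda a, b: _intersect_y(a, b, ymin)),
        (lambda p: p[1] <= ymax, lambda a, b: _intersect_y(a, b, ymax)),
    ]
    output = list(polygon)
    for inside, intersect in edges:
        input_list, output = output, []
        for i, curr in enumerate(input_list):
            prev = input_list[i - 1]       # wraps around at i == 0
            if inside(curr):
                if not inside(prev):       # edge enters: emit crossing point
                    output.append(intersect(prev, curr))
                output.append(curr)        # then emit the inside vertex
            elif inside(prev):             # edge leaves: emit crossing point
                output.append(intersect(prev, curr))
        if not output:                     # polygon entirely clipped away
            break
    return output
```

Note that the intersection routines are only invoked when the two endpoints lie on opposite sides of the clip line, so the denominators are never zero. The paper's techniques sit in front of this loop, detecting runs of consecutive vertices sharing a common region and simplifying the input polygon before it reaches the per-edge passes.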