Search Results
Now showing 1 - 3 of 3
Item: Perceived Rendering Thresholds for High-Fidelity Graphics on Small Screen Devices (The Eurographics Association, 2006)
Aranha, M.; Debattista, K.; Chalmers, A.; Hill, S.; Louise M. Lever and Mary McDerby

Small screen devices, also known as small-form-factor (SFF) devices and including mobile phones and ultra-mobile PCs, are increasingly ubiquitous. Their uses include gaming, navigation and interactive visualisation. SFF devices are, however, inherently limited for perception by their physical characteristics, as well as by limited processing and battery power. High-fidelity graphics systems have significant computational requirements, which can be reduced through the use of perceptually based rendering techniques. In order to exploit these techniques on SFF devices, a sound understanding of the perceptual characteristics of the display device is needed. This paper investigates the perceived rendering threshold specific to SFF devices in comparison to traditional display devices. We show that the threshold for SFF systems differs significantly from that of typical displays, indicating that substantial savings in rendering quality, and thus computational resources, can be achieved for SFF devices.

Item: Selective Parallel Rendering for High-Fidelity Graphics (The Eurographics Association, 2005)
Debattista, K.; Sundstedt, V.; Pereira, F.; Chalmers, A.; Louise M. Lever and Mary McDerby

High-fidelity rendering of complex scenes is one of the primary goals of computer graphics. Unfortunately, high-fidelity rendering is notoriously computationally expensive. In this paper we present a framework for high-fidelity rendering in reasonable time through our Rendering on Demand system. We bring together two of the main acceleration methods for rendering: selective rendering and parallel rendering. We present a selective rendering system which incorporates selective guidance.
Amongst other things, the selective guidance system takes advantage of limitations in the human visual system to concentrate rendering effort on the most perceptually important features in an image. Parallel rendering reduces the costs further by distributing the workload amongst a number of computational nodes. We present an implementation of our framework as an extension of the lighting simulation system Radiance, adding a selective guidance system that can exploit visual perception. Furthermore, we parallelise Radiance and its primary acceleration data structure, the irradiance cache, and also use the selective guidance to improve load balancing of the distributed workload. Our results demonstrate the effectiveness of the implementation and thus the potential of the rendering framework.

Item: Structured Image Techniques for Efficient High-Fidelity Graphics (The Eurographics Association, 2006)
Yang, X.; Debattista, K.; Chalmers, A.; Louise M. Lever and Mary McDerby

Global illumination rendering in real time for high-fidelity graphics remains one of the biggest challenges for computer graphics in the foreseeable future. Recent work has shown that significant amounts of time can be saved by selectively rendering in high quality only those parts of the image that are considered perceptually more important. Regions of the final rendering that are deemed more perceptually important can be identified through lower-quality but rapid rasterisation rendering. By exploiting this prior knowledge of the scene and taking advantage of image-space algorithms to concentrate rendering on the more salient areas, higher-performance rendering may be achieved. In this paper, we present a selective rendering framework based on ray tracing for global illumination which uses a rapid image preview of the scene to identify important image regions, structures these regions, and uses this knowledge to direct a fraction of the rays traditionally shot.
The undersampled image is then reconstructed using algorithms from image processing. We demonstrate that, while this approach significantly reduces the amount of computation, it still maintains high perceptual image quality.
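The selective-rendering idea shared by these abstracts, tracing more rays where a viewer is likely to notice and fewer elsewhere, can be illustrated with a toy sketch. The `sample_budget` function, its parameters, and the saliency values below are illustrative assumptions, not the papers' actual algorithms:

```python
import numpy as np

def sample_budget(saliency, full_spp=16, min_spp=1):
    """Map a per-pixel saliency estimate in [0, 1] to a ray count per pixel.
    Salient pixels receive up to full_spp rays; the rest as few as min_spp.
    (Hypothetical helper; the papers' selective guidance is more elaborate.)"""
    s = np.clip(saliency, 0.0, 1.0)
    return np.rint(min_spp + s * (full_spp - min_spp)).astype(int)

# Toy 2x2 saliency map: one highly salient pixel, three unimportant ones.
saliency = np.array([[1.0, 0.1],
                     [0.0, 0.2]])
budget = sample_budget(saliency)
total = budget.sum()           # rays actually traced selectively
naive = saliency.size * 16     # rays a uniform 16-spp render would trace
```

Here the selective budget traces 23 rays where a uniform render would trace 64, which is the source of the computational savings the papers report, though their guidance maps come from perceptual models rather than a hand-written array.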
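The reconstruction step in the third abstract, filling in pixels that received no rays using image-processing algorithms, can be sketched with a simple nearest-neighbour fill. This is an assumed stand-in for the paper's actual reconstruction filters, not its method:

```python
import numpy as np

def reconstruct_nearest(image, mask):
    """Fill pixels that were never traced (mask == False) with the value of
    the nearest traced pixel, by Manhattan distance. A deliberately simple
    stand-in for the image-processing reconstruction the paper describes."""
    h, w = image.shape
    traced = np.argwhere(mask)          # coordinates of pixels that got rays
    out = image.copy()
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                d = np.abs(traced - [y, x]).sum(axis=1)
                ny, nx = traced[d.argmin()]
                out[y, x] = image[ny, nx]
    return out

# Undersampled 3x3 image: only two pixels were traced.
img = np.zeros((3, 3))
mask = np.zeros((3, 3), dtype=bool)
img[0, 0], mask[0, 0] = 1.0, True
img[2, 2], mask[2, 2] = 5.0, True
full = reconstruct_nearest(img, mask)
```

Traced pixels keep their values, and each untraced pixel inherits its nearest traced neighbour, so `full[1, 2]` becomes 5.0 while `full[0, 1]` becomes 1.0.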