Open Issues in Photo-realistic Rendering
For more than two decades, computer graphics researchers have tried to make their images as photo-realistic as possible, mainly by simulating the physical laws of light and adding one effect after another. Recent years have brought a shift of effort towards real-time methods, easy-to-use systems, integration with vision, modelling tools, and the like. Image quality is now mostly accepted as sufficient for real-world applications, but where do we really stand? Numerous problems remain to be solved, and there is notable progress in these areas. No question, the plug-in philosophy of some commercial products has enabled several of these new techniques to spread quite fast. Unfortunately, many other developments happen in isolated systems built purely for publication, and never make it into commercial software. This presentation aims to raise awareness of such activities and to evaluate the steps that still separate us from perfect photo-realism. The talk begins with a brief overview of rendering history, highlighting the main research directions at different times. It explains the driving forces behind these developments, namely complexity, speed, and accuracy, and perhaps also expressiveness in recent years. Solved and unsolved areas are examined and contrasted with practically solved but theoretically incomplete topics such as translucency, tone mapping, light-source and BTF descriptions, and error metrics for image-quality evaluation. The distinction here is mainly between believable, correct, and predictive images. Modelling complexity also remains an issue for truly realistic images. Finally, some recent work on polarization and fluorescence is presented.