Smarter screen space shading

dc.contributor.author: Nalbach, Oliver
dc.date.accessioned: 2018-01-15T13:26:15Z
dc.date.available: 2018-01-15T13:26:15Z
dc.date.issued: 2017-11-10
dc.description.abstract: This dissertation introduces a range of new methods for producing images of virtual scenes within milliseconds. Imposing as few constraints as possible on the scenes that can be handled, e.g., regarding geometric changes over time or lighting conditions, precludes pre-computation and makes this a particularly difficult problem. We first present a general approach, called deep screen space, with which a variety of aspects of light transport can be simulated under these constraints. This approach is then extended to also handle scenes containing participating media such as clouds. We further show how to improve the correctness of deep screen space and related algorithms by accounting for the mutual visibility of points in a scene. After that, we take a completely different view of image generation, using a learning-based approach to approximate a rendering function. We show that neural networks can hallucinate shading effects that would otherwise have to be computed by costly analytic means. Finally, we contribute a holistic framework for dealing with phosphorescent materials in computer graphics, covering all aspects from the acquisition of real materials, to easy editing, to image synthesis.
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/2632112
dc.language.iso: en
dc.title: Smarter screen space shading
dc.type: Thesis
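
As an illustration of the learning-based direction mentioned in the abstract (approximating a rendering function with a neural network), below is a minimal Python/PyTorch sketch of a network that maps per-pixel deferred-shading attributes to shaded colors. The G-buffer channel layout, network size, and training loss here are placeholder assumptions for illustration only; they are not the architecture described in the dissertation.

    import torch
    import torch.nn as nn

    class ScreenSpaceShader(nn.Module):
        """Tiny CNN mapping per-pixel G-buffer attributes to RGB shading.
        The 9 input channels (e.g., normal xyz, depth, diffuse rgb, ...)
        are an assumed layout for this sketch."""
        def __init__(self, in_channels=9):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, 3, kernel_size=3, padding=1),  # RGB output
            )

        def forward(self, gbuffer):
            return self.net(gbuffer)

    # One training step against a reference renderer's output
    # (random stand-in tensors here instead of real data).
    model = ScreenSpaceShader()
    gbuffer = torch.rand(1, 9, 256, 256)    # stand-in G-buffer
    reference = torch.rand(1, 3, 256, 256)  # stand-in ground-truth shading
    loss = nn.functional.mse_loss(model(gbuffer), reference)
    loss.backward()

In such a setup, the expensive analytic shading is paid only once, offline, to produce training targets; at runtime a single forward pass per frame replaces it.
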
Files
Original bundle
Name: thesis.pdf
Size: 188.98 MB
Format: Adobe Portable Document Format