Live Inverse Rendering

Date
2020-02-03
Publisher
SciDok - Der Wissenschaftsserver der Universität des Saarlandes
Abstract
The field of computer graphics is being transformed by the process of ‘personalization’. The advent of augmented and mixed reality technology is challenging existing graphics systems, which traditionally required elaborate hardware and skilled artistic effort. Photorealistic graphics now need to be rendered on mobile devices with minimal sensors and compute power, and integrated automatically with the real-world environment. Seamlessly integrating graphics into real environments requires estimating the fundamental light transport components of a scene: geometry, reflectance and illumination. While estimating environmental geometry and self-localization on mobile devices has progressed rapidly, estimating scene reflectance and illumination from monocular images or videos in real time (termed live inverse rendering) is still at a nascent stage. The challenge lies in designing efficient representations and models for these appearance parameters, and in solving the resulting high-dimensional, non-linear and under-constrained system of equations at frame rate. This thesis comprehensively explores, for the first time, various representations, formulations, algorithms and systems for addressing these challenges in monocular inverse rendering. Starting with simple assumptions on the light transport model, namely Lambertian surface reflectance and a single light bounce, the thesis expands in various directions by including 3D geometry, multiple light bounces, non-Lambertian isotropic surface reflectance and data-driven reflectance representations to address various facets of this problem. In the first part, the thesis explores the design of fast parallel non-linear GPU optimization schemes for solving both the sparse and the dense sets of equations underlying the inverse rendering problem. In the next part, it applies recent advances in machine learning to design novel formulations and loss energies that significantly advance the state of the art in reflectance and illumination estimation. Several real-time applications of illumination-aware scene editing, including relighting and material cloning, are also shown to be made possible for the first time by the new models proposed in this thesis. Finally, an outlook on future work on this problem is laid out, with particular emphasis on the interesting new opportunities afforded by recent advances in machine learning.
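As a point of reference for the Lambertian, single-bounce setting mentioned in the abstract, a commonly used image formation model (a sketch of the standard spherical-harmonics formulation, not necessarily the exact model adopted in the thesis) writes the observed radiance at a surface point as the product of its diffuse albedo and the incident irradiance, with distant illumination approximated by second-order spherical harmonics:

B(\mathbf{p}) = a(\mathbf{p}) \sum_{l=0}^{2} \sum_{m=-l}^{l} c_{lm}\, Y_{lm}\big(\mathbf{n}(\mathbf{p})\big)

Here a(\mathbf{p}) denotes the Lambertian albedo at point \mathbf{p}, \mathbf{n}(\mathbf{p}) the surface normal, Y_{lm} the spherical harmonics basis functions and c_{lm} the illumination coefficients. Live inverse rendering then amounts to recovering the albedo, normals and illumination coefficients from the observed image stream at frame rate, which is the under-constrained, non-linear estimation problem the abstract describes.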
Citation
http://dx.doi.org/10.22028/D291-30206