Live Inverse Rendering

dc.contributor.author: Meka, Abhimitra
dc.date.accessioned: 2021-01-16T15:44:34Z
dc.date.available: 2021-01-16T15:44:34Z
dc.date.issued: 2020-02-03
dc.description.abstract: The field of computer graphics is being transformed by the process of ‘personalization’. The advent of augmented and mixed reality technology is challenging existing graphics systems, which traditionally required elaborate hardware and skilled artistic effort. Now, photorealistic graphics must be rendered on mobile devices with minimal sensors and compute power, and integrated automatically with the real-world environment. Seamlessly integrating graphics into real environments requires estimating the fundamental light transport components of a scene: geometry, reflectance and illumination. While estimating environmental geometry and self-localization on mobile devices has progressed rapidly, the task of estimating scene reflectance and illumination from monocular images or videos in real time (termed live inverse rendering) is still at a nascent stage. The challenge is that of designing efficient representations and models for these appearance parameters and solving the resulting high-dimensional, non-linear and under-constrained system of equations at frame rate. This thesis comprehensively explores, for the first time, various representations, formulations, algorithms and systems for addressing these challenges in monocular inverse rendering. Starting with simple assumptions on the light transport model – Lambertian surface reflectance and a single light bounce – the thesis expands in various directions by including 3D geometry, multiple light bounces, non-Lambertian isotropic surface reflectance and data-driven reflectance representations to address various facets of this problem. In the first part, the thesis explores the design of fast parallel non-linear GPU optimization schemes for solving both sparse and dense sets of equations underlying the inverse rendering problem. In the next part, it applies current advances in machine learning methods to design novel formulations and loss energies that significantly advance the state of the art in reflectance and illumination estimation. Several real-time applications of illumination-aware scene editing, including relighting and material cloning, are also shown to be made possible for the first time by the new models proposed in this thesis. Finally, an outlook for future work on this problem is laid out, with particular emphasis on the new opportunities afforded by recent advances in machine learning.
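As context for the abstract's starting point, the Lambertian, single-bounce image formation model factors each pixel into albedo times a clamped cosine shading term, I(p) = albedo(p) · max(0, n(p) · l); inverse rendering then recovers these factors from the observed image. The sketch below is purely illustrative and not taken from the thesis; all names and the naive shading-division inverse are assumptions.

```python
import numpy as np

def render_lambertian(albedo, normals, light_dir):
    """Forward model: per-pixel albedo times clamped cosine shading."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    shading = np.clip(normals @ l, 0.0, None)   # (H, W) cosine term
    return albedo * shading

def recover_albedo(image, normals, light_dir, eps=1e-6):
    """Naive inverse: divide out the shading wherever it is non-zero.
    Real systems must instead solve an under-constrained optimization,
    since normals and lighting are themselves unknown."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    shading = np.clip(normals @ l, 0.0, None)
    return np.where(shading > eps, image / np.maximum(shading, eps), 0.0)

# Toy example: a flat patch facing the camera, lit head-on.
normals = np.zeros((4, 4, 3))
normals[..., 2] = 1.0                 # all normals point along +z
albedo = np.full((4, 4), 0.8)
light = [0.0, 0.0, 1.0]
img = render_lambertian(albedo, normals, light)
est = recover_albedo(img, normals, light)
```

This only illustrates why the problem is under-constrained: with a single grayscale observation per pixel, albedo and shading can only be separated by adding priors or, as in the thesis, learned models and optimization at frame rate.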
dc.description.sponsorship: European Research Council Grants CapReal and 4DReply
dc.identifier.citation: http://dx.doi.org/10.22028/D291-30206
dc.identifier.other: doi:10.22028/D291-30206
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/2632997
dc.language.iso: en
dc.publisher: SciDok - Der Wissenschaftsserver der Universität des Saarlandes
dc.subject: inverse rendering
dc.subject: reflectance estimation
dc.subject: relighting
dc.subject: real time
dc.title: Live Inverse Rendering
dc.type: Article
Files

Original bundle
Name: LiveInverseRendering_AbhimitraMeka.pdf
Size: 100.95 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.79 KB
Format: Item-specific license agreed upon to submission