Title: Material and Lighting Reconstruction for Complex Indoor Scenes with Texture-space Differentiable Rendering
Authors: Nimier-David, Merlin; Dong, Zhao; Jakob, Wenzel; Kaplanyan, Anton; Bousseau, Adrien; McGuire, Morgan
Date: 2021-07-12
ISBN: 978-3-03868-157-1
ISSN: 1727-3463
DOI: https://doi.org/10.2312/sr.20211292
URL: https://diglib.eg.org:443/handle/10.2312/sr20211292
Pages: 73-84

Abstract: Modern geometric reconstruction techniques achieve impressive levels of accuracy in indoor environments. However, such captured data typically keeps lighting and materials entangled, which makes it impossible to manipulate the resulting scenes in photorealistic settings such as augmented/mixed reality and robotics simulation. Moreover, various imperfections in the captured data, such as missing detailed geometry, camera misalignment, and uneven coverage of observations, pose challenges for scene recovery. To address these challenges, we present a robust optimization pipeline based on differentiable rendering to recover physically based materials and illumination, leveraging RGB and geometry captures. We introduce a novel texture-space sampling technique and carefully chosen inductive priors to help guide reconstruction, avoiding low-quality or implausible local minima. Our approach enables robust and high-resolution reconstruction of complex materials and illumination in captured indoor scenes. This enables a variety of applications including novel view synthesis, scene editing, local and global relighting, synthetic data augmentation, and other photorealistic manipulations.

CCS Concepts: Computing methodologies --> Reconstruction; Mixed / augmented reality; Virtual reality; Ray tracing
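The abstract describes recovering materials by optimizing through a differentiable renderer against captured RGB images. The following toy sketch illustrates that general idea only: a trivially differentiable "renderer" (per-texel albedo times a known light intensity) and a gradient-descent loop that recovers the albedo texture from an observed image. All names (`render`, `true_albedo`, the learning rate) are illustrative assumptions, not the paper's actual pipeline, which uses texture-space sampling and physically based rendering.

```python
import numpy as np

rng = np.random.default_rng(0)

def render(albedo, light):
    """Toy differentiable shading model: pixel = albedo * light.
    Stands in for a full physically based differentiable renderer."""
    return albedo * light

# Hypothetical ground-truth texture and known illumination.
true_albedo = rng.uniform(0.1, 0.9, size=(16, 16))
light = 1.5
observed = render(true_albedo, light)  # stands in for the RGB capture

# Start from a flat gray guess and minimize an image-space L2 loss.
albedo = np.full_like(true_albedo, 0.5)
lr = 0.2
for _ in range(200):
    residual = render(albedo, light) - observed  # image-space error
    grad = residual * light                      # d/d(albedo) of 0.5 * ||residual||^2
    albedo -= lr * grad                          # gradient-descent update
    albedo = np.clip(albedo, 0.0, 1.0)           # simple plausibility prior on albedo

print(float(np.abs(albedo - true_albedo).max()))  # recovery error after optimization
```

In the real pipeline the loss gradients flow through a ray tracer and the optimized parameters live in texture space, but the structure (render, compare to observations, step the material parameters) is the same.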