Authors: Siersleben, Daniel; Ortiz-Cayon, Rodrigo; Istenic, Klemen; Tomoto, Yusuke
Editors: Schaub-Meyer, Simone; Hardeberg, Jon Yngve; Rushmeier, Holly
Date accessioned: 2024-08-28
Date available: 2024-08-28
Date issued: 2024
ISBN: 978-3-03868-264-6
ISSN: 2309-5059
DOI: https://doi.org/10.2312/mam.20241181
Handle: https://diglib.eg.org/handle/10.2312/mam20241181

Abstract: High-quality geometry reconstruction from multi-view images, with subsequent decomposition of appearance into its physical shading components, could enable seamless integration of neural reconstructions into modern rendering workflows. While 3D reconstruction techniques have steadily improved, the task of inverse rendering, i.e., decomposing appearance into lighting effects and material properties, remains fundamentally ill-posed and highly ambiguous. We show that current state-of-the-art inverse rendering approaches fail to accurately recover material properties, significantly impacting relighting quality. Furthermore, we demonstrate that existing evaluation methods, which rely on image-based metrics, do not adequately capture reconstruction quality under novel lighting conditions. Our findings illustrate the dependence of current systems on simplified setups with predefined illumination, which are necessary to reliably disentangle light and material contributions and ultimately achieve convincing relighting.

License: Attribution 4.0 International License
Title: The Challenges of Relighting from Multi-View Observations
Pages: 4