SVBRDF Reconstruction by Transferring Lighting Knowledge

Zhu, Pengfei; Lai, Shuichang; Chen, Mufan; Guo, Jie; Liu, Yifan; Guo, Yanwen; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.

Date: 2023-10-09 (2023)
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.14973
Handle: https://diglib.eg.org:443/handle/10.1111/cgf14973
Pages: 11

Abstract: The problem of reconstructing spatially-varying BRDFs (SVBRDFs) from RGB images has been studied for decades. Researchers have faced a dilemma: opt for higher quality at the cost of camera and light calibration, or for greater convenience, without complex setups, at the expense of quality. We address this challenge by introducing a two-branch network that learns the lighting effects in images. The two branches, referred to as Light-known and Light-aware, differ in their need for light information. The Light-aware branch is guided by the Light-known branch to acquire the knowledge to discern light effects and surface reflectance properties, but without relying on light positions. Both branches are trained on a synthetic dataset, but during testing on real-world cases without calibration, only the Light-aware branch is activated. To make more effective use of varied lighting conditions, we employ gated recurrent units (GRUs) to fuse the features extracted from different images. The two modules mutually benefit when multiple inputs are provided. We present reconstruction results on both synthetic and real-world examples, demonstrating high quality while remaining lightweight in comparison to previous methods.

CCS Concepts: Computing methodologies -> Reflectance modeling
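The abstract describes fusing features extracted from a variable number of input images with gated recurrent units. The paper's actual network is not reproduced here; the following is only a toy, pure-Python sketch of the general idea: a single GRU cell consumes one per-image feature vector at a time and folds it into a running fused state, so any number of inputs collapses to one fixed-size representation. All names, dimensions, and weights below are hypothetical illustrations, not the authors' architecture.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    # Plain matrix-vector product over Python lists.
    return [sum(w * a for w, a in zip(row, v)) for row in W]

def gru_step(h, x, Wz, Wr, Wh):
    """One GRU update: fold the feature vector x of a new image into state h."""
    hx = h + x                                          # concatenation [h; x]
    z = [sigmoid(a) for a in matvec(Wz, hx)]            # update gate
    r = [sigmoid(a) for a in matvec(Wr, hx)]            # reset gate
    rh = [ri * hi for ri, hi in zip(r, h)]              # gated previous state
    cand = [math.tanh(a) for a in matvec(Wh, rh + x)]   # candidate state
    return [(1 - zi) * hi + zi * ci for zi, hi, ci in zip(z, h, cand)]

def fuse_features(feature_list, dim=4, seed=0):
    """Fuse per-image feature vectors into one fixed-size state, one at a time."""
    rng = random.Random(seed)  # random weights stand in for trained ones
    rand_mat = lambda: [[rng.uniform(-0.5, 0.5) for _ in range(2 * dim)]
                        for _ in range(dim)]
    Wz, Wr, Wh = rand_mat(), rand_mat(), rand_mat()
    h = [0.0] * dim
    for x in feature_list:
        h = gru_step(h, x, Wz, Wr, Wh)
    return h

# Fuse features from three hypothetical input images of the same surface.
feats = [[0.1, 0.2, 0.3, 0.4], [0.4, 0.3, 0.2, 0.1], [0.5, 0.5, 0.5, 0.5]]
fused = fuse_features(feats)
print(len(fused))  # one fused vector, same dimension regardless of image count
```

Because the same cell is applied per image, the fused state keeps a fixed size however many lighting conditions are supplied, which matches the abstract's claim that the method benefits from multiple inputs without requiring a fixed number of them.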