Title: Learning Scene Illumination by Pairwise Photos from Rear and Front Mobile Cameras
Authors: Cheng, Dachuan; Shi, Jian; Chen, Yanyun; Deng, Xiaoming; Zhang, Xiaopeng; Fu, Hongbo
Editors: Ghosh, Abhijeet; Kopf, Johannes
Date: 2018-10-07
Year: 2018
ISSN: 1467-8659
DOI: 10.1111/cgf.13561 (https://doi.org/10.1111/cgf.13561)
Handle: https://diglib.eg.org:443/handle/10.1111/cgf13561
Pages: 213-221
Keywords: Human-centered computing → Mixed / augmented reality; Computing methodologies → Scene understanding; Rendering

Abstract: Illumination estimation is an essential problem in computer vision, graphics, and augmented reality. In this paper, we propose a learning-based method to recover low-frequency scene illumination, represented as spherical harmonic (SH) functions, from pairwise photos taken by the rear and front cameras of mobile devices. An end-to-end deep convolutional neural network (CNN) is designed to process images from symmetric views and predict SH coefficients. We introduce a novel Render Loss to improve the rendering quality of the predicted illumination. A high-quality high dynamic range (HDR) panoramic image dataset was developed for training and evaluation. Experiments show that our model produces visually and quantitatively superior results compared to the state of the art. Moreover, our method is practical for mobile-based applications.
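As background on the representation used in the abstract: low-frequency illumination as degree-2 spherical harmonics means 9 coefficients per color channel, and diffuse shading at a surface normal is a dot product of those coefficients with the SH basis evaluated at that normal. The sketch below is a minimal illustration of that evaluation (it is not the paper's network or its Render Loss; the function names and the `(9, 3)` coefficient layout are this sketch's own assumptions):

```python
import numpy as np

def sh_basis(n):
    """Real SH basis (degrees 0..2) evaluated at unit normal n = (x, y, z)."""
    x, y, z = n
    return np.array([
        0.282095,                    # Y_0^0
        0.488603 * y,                # Y_1^-1
        0.488603 * z,                # Y_1^0
        0.488603 * x,                # Y_1^1
        1.092548 * x * y,            # Y_2^-2
        1.092548 * y * z,            # Y_2^-1
        0.315392 * (3 * z * z - 1),  # Y_2^0
        1.092548 * x * z,            # Y_2^1
        0.546274 * (x * x - y * y),  # Y_2^2
    ])

def shade(coeffs, normal):
    """Low-frequency shading at `normal` from SH coefficients.

    coeffs: (9, 3) array, one column per RGB channel.
    Returns a length-3 RGB value.
    """
    return sh_basis(normal) @ coeffs
```

A render loss in the spirit the abstract describes could then compare images shaded this way from predicted versus ground-truth coefficients, rather than comparing the coefficients directly.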