Deep Shape and SVBRDF Estimation using Smartphone Multi-lens Imaging

Authors: Fan, Chongrui; Lin, Yiming; Ghosh, Abhijeet
Editors: Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Date issued: 2023 (available 2023-10-09)
ISSN: 1467-8659 (Computer Graphics Forum)
DOI: https://doi.org/10.1111/cgf.14972
Handle: https://diglib.eg.org:443/handle/10.1111/cgf14972
Pages: 12

Abstract: We present a deep neural network-based method that acquires high-quality shape and spatially varying reflectance (SVBRDF) of 3D objects using smartphone multi-lens imaging. Our method captures two images simultaneously with a smartphone's zoom lens and wide-angle lens, under either natural illumination or phone flash, effectively functioning as a single-shot method. Unlike traditional multi-view stereo methods, which require sufficiently different viewpoints and estimate depth only at a coarse scale, our method estimates fine-scale depth from the optical-flow field that arises from the subtle baseline and perspective differences between the two lenses in the simultaneously captured images. We further guide the SVBRDF estimation with the estimated depth, yielding superior results compared to existing single-shot methods.

CCS Concepts: Computing methodologies → Computational photography; Shape inference; Reflectance modeling

Keywords: Computing methodologies; Computational photography; Shape inference; Reflectance modeling
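The pipeline described in the abstract (simultaneous dual-lens capture, optical flow from the subtle inter-lens baseline, fine-scale depth that guides SVBRDF estimation) can be illustrated with classical tools. Below is a minimal sketch that uses OpenCV's Farnebäck dense optical flow as a stand-in for the paper's learned flow estimation; the file names, the field-of-view alignment, and the flow-magnitude-to-depth conversion are illustrative assumptions, not the authors' implementation.

import cv2
import numpy as np

# Two frames captured at the same instant by the wide-angle and zoom lenses
# (file names are assumed for illustration).
wide = cv2.imread("wide.jpg", cv2.IMREAD_GRAYSCALE)
zoom = cv2.imread("zoom.jpg", cv2.IMREAD_GRAYSCALE)
assert wide is not None and zoom is not None, "missing input images"

# Roughly align fields of view by resampling the wide image onto the zoom
# frame; a real system would use per-device lens calibration instead.
wide = cv2.resize(wide, (zoom.shape[1], zoom.shape[0]))

# Dense optical flow between the two views. Because the lenses sit only a
# few millimetres apart, this flow behaves like a fine-scale disparity field.
flow = cv2.calcOpticalFlowFarneback(
    wide, zoom, None,
    pyr_scale=0.5, levels=4, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.1, flags=0)

# Flow magnitude as a rough disparity proxy; its inverse then serves as a
# coarse per-pixel depth cue of the kind that could condition SVBRDF
# estimation downstream.
disparity = np.linalg.norm(flow, axis=2)
depth_proxy = 1.0 / (disparity + 1e-6)

In the paper itself this step is learned end to end and the estimated depth guides the SVBRDF branch; the sketch only shows why the subtle dual-lens baseline yields a usable flow signal.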