Title: Deep Reflectance Scanning: Recovering Spatially-varying Material Appearance from a Flash-lit Video Sequence
Authors: Ye, Wenjie; Dong, Yue; Peers, Pieter; Guo, Baining
Editors: Benes, Bedrich; Hauser, Helwig
Date: 2021-10-08
Year: 2021
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.14387
URL: https://diglib.eg.org:443/handle/10.1111/cgf14387
Pages: 409-427
Keywords: SVBRDF; hand-held capture; automatic alignment

Abstract: In this paper we present a novel method for recovering high-resolution spatially-varying isotropic surface reflectance of a planar exemplar from a flash-lit close-up video sequence captured with a regular hand-held mobile phone. We do not require careful calibration of the camera and lighting parameters; instead, we compute a per-pixel flow map using a deep neural network to align the input video frames. For each video frame, we extract neural reflectance features, warp them directly using the per-pixel flow, and subsequently pool the warped features. Our method facilitates convenient hand-held acquisition of spatially-varying surface reflectance with commodity hardware by non-expert users. Furthermore, it aggregates reflectance features from surface points visible in only a subset of the captured video frames, enabling the creation of high-resolution reflectance maps that exceed the native camera resolution. We demonstrate and validate our method on a variety of synthetic and real-world spatially-varying materials.
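The abstract describes aggregating per-frame neural reflectance features by warping each frame with a per-pixel flow map and then pooling the warped features, so that points visible in only some frames still contribute. As a rough illustration of that warp-and-pool step only (not the paper's network: the bilinear warping, validity masking, and average pooling here are illustrative assumptions, and all function names are hypothetical), a minimal numpy sketch:

```python
import numpy as np

def warp_bilinear(features, flow):
    """Warp an (H, W, C) feature map by a per-pixel flow (H, W, 2).

    flow[y, x] = (dx, dy) points from the reference grid into the
    source frame; out-of-bounds samples are flagged invalid.
    """
    H, W, _ = features.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    sx = xs + flow[..., 0]
    sy = ys + flow[..., 1]
    valid = (sx >= 0) & (sx <= W - 1) & (sy >= 0) & (sy <= H - 1)
    sx = np.clip(sx, 0, W - 1)
    sy = np.clip(sy, 0, H - 1)
    x0 = np.floor(sx).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    y0 = np.floor(sy).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    wx = (sx - x0)[..., None]
    wy = (sy - y0)[..., None]
    top = features[y0, x0] * (1 - wx) + features[y0, x1] * wx
    bot = features[y1, x0] * (1 - wx) + features[y1, x1] * wx
    return top * (1 - wy) + bot * wy, valid

def pool_warped(frame_features, flows):
    """Average warped features over frames, counting only valid pixels."""
    acc = np.zeros_like(frame_features[0])
    count = np.zeros(frame_features[0].shape[:2])
    for feats, flow in zip(frame_features, flows):
        warped, valid = warp_bilinear(feats, flow)
        acc[valid] += warped[valid]
        count += valid  # booleans add as 0/1
    return acc / np.maximum(count, 1)[..., None]
```

The validity mask is what lets partially observed surface points be averaged over only the frames in which they actually appear, rather than being polluted by out-of-view samples.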