High-Gloss SVBRDF Capture Using Bounce Light

Authors: Iser, Tomáš; Ardelean, Andrei-Timotei; Weyrich, Tim; Masia, Belen; Thies, Justus
Date: 2026-04-21
ISSN: 1467-8659
Handle: https://diglib.eg.org/handle/10.1111/cgf70359
DOI: https://doi.org/10.1111/cgf.70359

Abstract: Reflectance capture aims at the visual reproduction of an object under varying illumination. Past works differ substantially in their experimental overhead, ranging from single- or few-image approaches that employ significant (often learned) priors at the expense of biased reconstructions, to more accurate approaches that tend to be time-consuming due to the need for carefully controlled illumination. Moreover, as we show, the frequently employed point-light or directional lighting tends to clip highlights and under-sample the reflectance of glossy surfaces, leading to incorrect reconstructions under previously unseen illumination. Our work aims to strike a new balance, combining a low-overhead capture methodology with a fast neural model fit. A key feature of our approach is the use of handheld indirect bounce light, which enables convenient capture, limits the dynamic range of the reflectance (effectively avoiding highlight clipping), and ensures contiguous hemispherical incidence even with few images. Our method does not require training on pre-existing material datasets and scales linearly with the number of pixels, making high-resolution capture of spatially varying bidirectional reflectance distribution functions (SVBRDFs) practical even on consumer hardware.

License: CC-BY-4.0
CCS Concepts: Computing methodologies → Reflectance modeling; Image-based rendering; Learning paradigms
Pages: 15