Title: Capturing Relightable Human Performances under General Uncontrolled Illumination
Authors: Li, Guannan; Wu, Chenglei; Stoll, Carsten; Liu, Yebin; Varanasi, Kiran; Dai, Qionghai; Theobalt, Christian
Editors: I. Navazo, P. Poulin
Publication year: 2013
Date: 2015-02-28
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.12047
Classification: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism: Color, shading, shadowing, and texture

Abstract: We present a novel approach to create relightable free-viewpoint human performances from multi-view video recorded under general uncontrolled and uncalibrated illumination. We first capture a multi-view sequence of an actor wearing arbitrary apparel and reconstruct a spatio-temporally coherent coarse 3D model of the performance using a marker-less tracking approach. Using these coarse reconstructions, we estimate the low-frequency component of the illumination in a spherical harmonics (SH) basis as well as the diffuse reflectance, and then utilize them to estimate the dynamic geometric detail of the actor based on shading cues. Given the high-quality time-varying geometry, the estimated illumination is extended to the all-frequency domain by re-estimating it in a wavelet basis. Finally, the high-quality all-frequency illumination is utilized to reconstruct the spatially-varying BRDF of the surface. The recovered time-varying surface geometry and spatially-varying non-Lambertian reflectance allow us to generate high-quality model-based free-viewpoint videos of the actor under novel illumination conditions. Our method enables plausible reconstruction of relightable dynamic scene models without a complex controlled lighting apparatus, and opens up a path towards relightable performance capture in less constrained environments and with less complex acquisition setups.
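The first estimation step described in the abstract, recovering low-frequency lighting in a second-order SH basis together with diffuse reflectance, can be posed as a per-channel linear least-squares problem under a Lambertian shading assumption. The sketch below illustrates that idea only; it is not the authors' implementation, and the function names (sh_basis, estimate_sh_lighting) and the simple per-vertex formulation are assumptions made for illustration.

import numpy as np

def sh_basis(normals):
    """9-term real SH basis (order 2) evaluated at unit normals; Nx3 -> Nx9."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        np.full_like(x, 0.282095),          # Y_0^0
        0.488603 * y,                       # Y_1^-1
        0.488603 * z,                       # Y_1^0
        0.488603 * x,                       # Y_1^1
        1.092548 * x * y,                   # Y_2^-2
        1.092548 * y * z,                   # Y_2^-1
        0.315392 * (3.0 * z * z - 1.0),     # Y_2^0
        1.092548 * x * z,                   # Y_2^1
        0.546274 * (x * x - y * y),         # Y_2^2
    ], axis=1)

def estimate_sh_lighting(intensities, normals, albedo):
    """Solve the per-channel least-squares problem  I_c ~ albedo_c * (B @ l_c)
    for 9 SH lighting coefficients per color channel (the clamped-cosine
    convolution of Lambertian shading is absorbed into the coefficients)."""
    B = sh_basis(normals)                   # N x 9 design matrix
    coeffs = []
    for c in range(intensities.shape[1]):
        A = albedo[:, c:c + 1] * B          # fold diffuse albedo into the basis
        l_c, *_ = np.linalg.lstsq(A, intensities[:, c], rcond=None)
        coeffs.append(l_c)
    return np.stack(coeffs)                 # 3 x 9 lighting coefficients

In practice, the coarse tracked mesh would supply the per-vertex normals and the multi-view images the observed intensities; since albedo and lighting are coupled, such a solve would typically be alternated with an albedo update rather than run once.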