Title: Improved Lighting Models for Facial Appearance Capture
Authors: Xu, Yingyan; Riviere, Jérémy; Zoss, Gaspard; Chandran, Prashanth; Bradley, Derek; Gotardo, Paulo
Editors: Pelechano, Nuria; Vanderhaeghe, David
Date: 2022-04-22
ISBN: 978-3-03868-169-4
ISSN: 1017-4656
DOI: https://doi.org/10.2312/egs.20221019
Handle: https://diglib.eg.org:443/handle/10.2312/egs20221019
Pages: 5-8 (4 pages)
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies --> Reflectance modeling; Reconstruction; Appearance and texture representations; 3D imaging

Abstract: Facial appearance capture techniques estimate the geometry and reflectance properties of facial skin by performing a computationally intensive inverse rendering optimization, in which one or more images are re-rendered a large number of times and compared to real images coming from multiple cameras. Due to the high computational burden, these techniques often make several simplifying assumptions to make the problem tractable. For example, it is common to assume that the scene consists only of distant light sources, and to ignore indirect bounces of light (both on the surface and within the surface). Also, methods based on polarized lighting often simplify the light's interaction with the surface and assume perfect separation of diffuse and specular reflectance. In this paper, we move in the opposite direction and demonstrate the impact on facial appearance capture quality when departing from these idealized conditions towards models that more accurately represent the lighting, while only minimally increasing the computational burden. We compare the results obtained with a state-of-the-art appearance capture method [RGB*20], with and without our proposed improvements to the lighting model.
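The inverse-rendering loop the abstract describes (re-render under a lighting model, compare to captured images, update the appearance estimate) can be sketched on a toy problem. Everything below — the single distant light, the purely Lambertian shading, the per-pixel albedo unknown, and the gradient step — is an illustrative assumption for this sketch, not the paper's actual pipeline.

```python
import numpy as np

# Toy inverse rendering: recover a per-pixel diffuse albedo from a
# synthetic "captured" image under one known distant light (Lambertian).
rng = np.random.default_rng(0)
H, W = 8, 8

# Flat geometry facing the camera; a real capture setup has full 3D normals.
normals = np.dstack([np.zeros((H, W)), np.zeros((H, W)), np.ones((H, W))])
light_dir = np.array([0.3, 0.2, 0.9])
light_dir /= np.linalg.norm(light_dir)
shading = np.clip(normals @ light_dir, 0.0, None)   # n . l per pixel

albedo_true = rng.uniform(0.2, 0.9, size=(H, W))
observed = albedo_true * shading                    # the "captured" image

# Re-render with the current estimate, compare, and descend the
# photometric loss 0.5 * sum((rendered - observed)^2).
albedo = np.full((H, W), 0.5)
for _ in range(200):
    rendered = albedo * shading
    residual = rendered - observed
    albedo -= 0.5 * residual * shading              # analytic d(loss)/d(albedo)

print(np.abs(albedo - albedo_true).max())           # should converge near zero
```

In a real capture system this loop runs over many camera views and a far richer reflectance and lighting model, which is exactly why the simplifying assumptions discussed in the abstract are attractive and why relaxing them must be done with care for cost.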