Title: Relighting Humans in the Wild: Monocular Full-Body Human Relighting with Domain Adaptation
Authors: Tajima, Daichi; Kanamori, Yoshihiro; Endo, Yuki
Editors: Zhang, Fang-Lue; Eisemann, Elmar; Singh, Karan
Date: 2021-10-14
ISSN: 1467-8659
DOI: 10.1111/cgf.14414 (https://doi.org/10.1111/cgf.14414)
URL: https://diglib.eg.org:443/handle/10.1111/cgf14414
Pages: 205-216

Abstract: Modern supervised approaches for human image relighting rely on training data generated from 3D human models. However, such datasets are often small (e.g., Light Stage data with a small number of individuals) or limited to diffuse materials (e.g., commercial 3D scanned human models). Thus, human relighting techniques suffer from poor generalization capability and a synthetic-to-real domain gap. In this paper, we propose a two-stage method for single-image human relighting with domain adaptation. In the first stage, we train a neural network for diffuse-only relighting. In the second stage, we train another network for enhancing non-diffuse reflection by learning residuals between real photos and images reconstructed by the diffuse-only network. Thanks to the second stage, we achieve higher generalization capability for various cloth textures while reducing the domain gap. Furthermore, to handle input videos, we integrate an illumination-aware deep video prior to greatly reduce flickering artifacts, even in challenging settings under dynamic illumination.

Keywords: Computing methodologies; Image manipulation; Neural networks
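
The abstract's second stage learns residuals between real photos and the diffuse-only reconstruction. Below is a minimal PyTorch-style sketch of that residual-learning idea only; the module names (DiffuseRelightNet, ResidualNet), the tiny convolutional stacks, the illumination encoding, and the L1 objective are placeholder assumptions for illustration, not the paper's actual architecture or training procedure.

import torch
import torch.nn as nn

# Sketch of the two-stage residual-learning idea from the abstract.
# Layer sizes and the light encoding are placeholders, not the paper's design.

class DiffuseRelightNet(nn.Module):
    """Stage 1 (hypothetical): predicts a diffuse-only relit image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, image, light):
        # Light conditioning is omitted here for brevity.
        return self.net(image)

class ResidualNet(nn.Module):
    """Stage 2 (hypothetical): predicts a non-diffuse residual (e.g., highlights)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, diffuse_only):
        return self.net(diffuse_only)

# One stage-2 training step on a real photo; the stage-1 network stays frozen.
diffuse_net, residual_net = DiffuseRelightNet().eval(), ResidualNet()
optimizer = torch.optim.Adam(residual_net.parameters(), lr=1e-4)

real_photo = torch.rand(1, 3, 256, 256)  # placeholder for a real photograph
light_code = torch.rand(1, 9)            # placeholder illumination encoding

with torch.no_grad():
    diffuse_only = diffuse_net(real_photo, light_code)  # diffuse-only reconstruction

residual = residual_net(diffuse_only)  # residual that closes the gap to the real photo
loss = nn.functional.l1_loss(diffuse_only + residual, real_photo)

optimizer.zero_grad()
loss.backward()
optimizer.step()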