Title: Learning Self-Shadowing for Clothed Human Bodies
Authors: Einabadi, Farshad; Guillemaut, Jean-Yves; Hilton, Adrian
Editors: Haines, Eric; Garces, Elena
Date issued: 2024 (available 2024-06-25)
ISBN: 978-3-03868-262-2
ISSN: 1727-3463
DOI: https://doi.org/10.2312/sr.20241159
URI: https://diglib.eg.org/handle/10.2312/sr20241159
Pages: 7
License: Creative Commons Attribution 4.0 International License
CCS Concepts: Computing methodologies -> Image-based rendering; Visibility; Neural networks

Abstract: This paper proposes to learn self-shadowing on full-body, clothed human postures from monocular colour image input by supervising a deep neural model. The proposed approach implicitly learns the articulated body shape in order to generate self-shadow maps, without explicitly reconstructing or estimating parametric 3D body geometry. Furthermore, it generalises to different people without per-subject pre-training and has fast inference times. The proposed neural model is trained on self-shadow maps rendered from 3D scans of real people under various light directions. Inference of shadow maps for a given illumination is performed from 2D image input only. Quantitative and qualitative experiments demonstrate results comparable to the state of the art whilst being monocular and achieving considerably faster inference. We provide ablations of our methodology and further show how the inferred self-shadow maps can benefit monocular full-body human relighting.
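
Note: the abstract describes a model that maps a monocular colour image and a light direction to a self-shadow map. The sketch below is a minimal, hypothetical PyTorch illustration of that interface only; it is not the authors' architecture, and all layer choices, the class name ShadowMapNet, and the L1 supervision against rendered shadow maps are assumptions made for illustration.

# Hypothetical sketch only: image + light direction -> self-shadow map.
# Not the architecture from the paper; it merely mirrors the interface in the abstract.
import torch
import torch.nn as nn

class ShadowMapNet(nn.Module):
    def __init__(self, base=32):
        super().__init__()
        # Encode the RGB image with the light direction tiled as 3 extra channels.
        self.enc = nn.Sequential(
            nn.Conv2d(3 + 3, base, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decode to a single-channel shadow map in [0, 1].
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, light_dir):
        # image: (B, 3, H, W); light_dir: (B, 3) unit vector towards the light.
        b, _, h, w = image.shape
        light_planes = light_dir.view(b, 3, 1, 1).expand(b, 3, h, w)
        x = torch.cat([image, light_planes], dim=1)
        return self.dec(self.enc(x))

# Example supervised step against a shadow map rendered from a 3D scan (random data here).
model = ShadowMapNet()
img = torch.rand(2, 3, 256, 256)                       # monocular colour input
light = nn.functional.normalize(torch.randn(2, 3), dim=1)
target = torch.rand(2, 1, 256, 256)                    # ground-truth self-shadow map
loss = nn.functional.l1_loss(model(img, light), target)
loss.backward()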