Learning Self-Shadowing for Clothed Human Bodies

Date
2024
Publisher
The Eurographics Association
Abstract
This paper proposes to learn self-shadowing on full-body, clothed human postures from monocular colour image input by supervising a deep neural model. The proposed approach implicitly learns the articulated body shape in order to generate self-shadow maps, without explicitly reconstructing or estimating parametric 3D body geometry. Furthermore, it generalises to different people without per-subject pre-training and achieves fast inference times. The proposed neural model is trained on self-shadow maps rendered from 3D scans of real people under various light directions. Inference of shadow maps for a given illumination is performed from 2D image input alone. Quantitative and qualitative experiments demonstrate results comparable to the state of the art whilst being monocular and achieving considerably faster inference times. We provide ablations of our methodology and further show how the inferred self-shadow maps can benefit monocular full-body human relighting.
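
The following minimal sketch only illustrates the kind of supervised setup the abstract describes (an image-to-shadow-map network conditioned on a light direction, trained against shadow maps rendered from 3D scans). It is not the authors' architecture or training code; the framework (PyTorch) and all names (ShadowMapNet, light_dir, gt_shadow, layer sizes, loss choice) are hypothetical.

    # Hedged sketch, not the paper's method: a toy network mapping a monocular
    # RGB image plus a light direction to a per-pixel self-shadow map, supervised
    # with rendered ground-truth shadow maps. All names and sizes are assumptions.
    import torch
    import torch.nn as nn

    class ShadowMapNet(nn.Module):
        """RGB (3 ch) + broadcast light direction (3 ch) -> shadow-map logits (1 ch)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),  # per-pixel shadow logits
            )

        def forward(self, image, light_dir):
            # Broadcast the 3-vector light direction to per-pixel channels.
            b, _, h, w = image.shape
            light = light_dir.view(b, 3, 1, 1).expand(b, 3, h, w)
            return self.net(torch.cat([image, light], dim=1))

    model = ShadowMapNet()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.BCEWithLogitsLoss()

    image = torch.rand(2, 3, 128, 128)      # monocular colour input (placeholder)
    light_dir = torch.randn(2, 3)           # target light direction per sample
    gt_shadow = torch.rand(2, 1, 128, 128)  # shadow map rendered from a 3D scan (placeholder)

    # One supervised training step.
    optimiser.zero_grad()
    loss = criterion(model(image, light_dir), gt_shadow)
    loss.backward()
    optimiser.step()

    # Inference needs only a 2D image and the desired light direction.
    with torch.no_grad():
        shadow_map = torch.sigmoid(model(image, light_dir))
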
CCS Concepts: Computing methodologies -> Image-based rendering; Visibility; Neural networks

        
@inproceedings{10.2312:sr.20241159,
  booktitle = {Eurographics Symposium on Rendering},
  editor    = {Haines, Eric and Garces, Elena},
  title     = {{Learning Self-Shadowing for Clothed Human Bodies}},
  author    = {Einabadi, Farshad and Guillemaut, Jean-Yves and Hilton, Adrian},
  year      = {2024},
  publisher = {The Eurographics Association},
  ISSN      = {1727-3463},
  ISBN      = {978-3-03868-262-2},
  DOI       = {10.2312/sr.20241159}
}