dc.contributor.author: Alldieck, Thiemo
dc.date.accessioned: 2020-11-03T19:56:29Z
dc.date.available: 2020-11-03T19:56:29Z
dc.date.issued: 2020
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/2632950
dc.description.abstract: Modeling 3D virtual humans has been an active field of research over the last decades. It plays a fundamental role in many applications, such as movie production, sports and medical sciences, and human-computer interaction. Early works focus on artist-driven modeling or utilize expensive scanning equipment. In contrast, our goal is the fully automatic acquisition of personalized avatars using only low-cost monocular video cameras. In this dissertation, we present fundamental advances in 3D human reconstruction from monocular images. We solve this challenging task with methods that effectively fuse information from multiple points in time and realistically complete reconstructions from sparse observations. Given a video or only a single photograph of a person in motion, we reconstruct, for the first time, not only his or her 3D pose but also the full 3D shape, including the face, hair, and clothing. We explore various approaches to monocular image- and video-based 3D human reconstruction, demonstrating both straightforward and sophisticated methods focused on accuracy, simplicity, usability, and visual fidelity. Through extensive evaluations, we give insights into important parameters, reconstruction quality, and the robustness of the methods. For the first time, our methods enable camera-based, easy-to-use self-digitization for exciting new applications such as telepresence and virtual try-on in online fashion shopping. [en_US]
dc.language.iso: en_US [en_US]
dc.subject: 3D reconstruction [en_US]
dc.subject: human reconstruction [en_US]
dc.title: Reconstructing 3D Human Avatars from Monocular Images [en_US]
dc.type: Thesis [en_US]

