Show simple item record

dc.contributor.author	Kim, Hyeongwoo
dc.date.accessioned	2021-01-11T12:55:45Z
dc.date.available	2021-01-11T12:55:45Z
dc.date.issued	2020-09-29
dc.identifier.citation	@doctoralThesis{Kim_2019, title={Learning-based face reconstruction and editing}, author={Kim, Hyeongwoo}, doi={http://dx.doi.org/10.22028/D291-32394}, year={2019} }	en_US
dc.identifier.other	http://dx.doi.org/10.22028/D291-32394
dc.identifier.uri	https://diglib.eg.org:443/handle/10.2312/2632995
dc.description	PhD dissertation	en_US
dc.description.abstract	Photo-realistic face editing – an important basis for a wide range of applications in movie and game productions and on mobile devices – is based on computationally expensive algorithms that often require many tedious, time-consuming manual steps. This thesis advances state-of-the-art face performance capture and editing pipelines by proposing machine learning-based algorithms for high-quality inverse face rendering in real time and highly realistic neural face rendering, and a video-based refocusing method for faces and general videos. In particular, the proposed contributions address fundamental open challenges towards real-time and highly realistic face editing. The first contribution addresses face reconstruction and introduces a deep convolutional inverse rendering framework that jointly estimates all facial rendering parameters from a single image in real time. The proposed method is based on a novel boosting process that iteratively updates the synthetic training data to better reflect the distribution of real-world images. Second, the thesis introduces a method for face video editing at previously unseen quality. It is based on a generative neural network with a novel space-time architecture, which enables photo-realistic re-animation of portrait videos using an input video. It is the first method to transfer the full 3D head position, head rotation, face expression, eye gaze and eye blinking from a source actor to a portrait video of a target actor. Third, the thesis contributes a new refocusing approach for faces and general videos in postprocessing. The proposed algorithm is based on a new depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video and the focus distance for each frame. The high-quality results shown with various applications and challenging scenarios demonstrate the contributions presented in the thesis, and also show potential for machine learning-driven algorithms to solve various open problems in computer graphics.	en_US
dc.description.sponsorship	Max Planck Institute for Informatics	en_US
dc.language.iso	en	en_US
dc.publisher	Max Planck Institute for Informatics	en_US
dc.relation.ispartofseries	PhD dissertation;
dc.subject	Face manipulation	en_US
dc.title	Learning-based face reconstruction and editing	en_US
dc.type	Thesis	en_US
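The abstract above describes a deep convolutional inverse rendering framework that regresses all facial rendering parameters from a single image. The code below is only a minimal, illustrative sketch of that general idea, not the thesis implementation: it assumes a generic parametric face model with arbitrarily chosen coefficient dimensions (shape, expression, albedo, pose, spherical-harmonics illumination) and a small PyTorch encoder, and it omits the boosting procedure over synthetic training data that the abstract mentions.

# Illustrative sketch (not the thesis implementation): a CNN that regresses
# 3DMM-style face rendering parameters from a single RGB face crop.
# All parameter dimensions below are assumptions chosen for illustration.

import torch
import torch.nn as nn

# Assumed coefficient sizes of a generic parametric face model.
PARAM_DIMS = {
    "shape": 80,         # identity geometry coefficients
    "expression": 64,    # expression blendshape coefficients
    "albedo": 80,        # skin reflectance coefficients
    "pose": 6,           # rotation (3) + translation (3)
    "illumination": 27,  # 3-band spherical harmonics, per RGB channel
}

class InverseRenderingNet(nn.Module):
    """Maps a 3x224x224 face crop to named groups of rendering parameters."""

    def __init__(self):
        super().__init__()
        # Small convolutional encoder; any image backbone could be substituted.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(256, sum(PARAM_DIMS.values()))

    def forward(self, images):
        feats = self.encoder(images).flatten(1)   # (batch, 256)
        flat = self.head(feats)                   # (batch, total parameters)
        # Split the flat prediction back into named parameter groups.
        out, offset = {}, 0
        for name, dim in PARAM_DIMS.items():
            out[name] = flat[:, offset:offset + dim]
            offset += dim
        return out

if __name__ == "__main__":
    net = InverseRenderingNet()
    params = net(torch.randn(2, 3, 224, 224))  # batch of two face crops
    print({name: tensor.shape for name, tensor in params.items()})

In a full pipeline of this kind, the predicted parameters would be fed to a differentiable face renderer and supervised against the input image; the specific losses and the synthetic-data boosting loop are described in the dissertation itself.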

