HDHumans: A Hybrid Approach for High-fidelity Digital Humans

dc.contributor.author: Habermann, Marc (en_US)
dc.contributor.author: Liu, Lingjie (en_US)
dc.contributor.author: Xu, Weipeng (en_US)
dc.contributor.author: Pons-Moll, Gerard (en_US)
dc.contributor.author: Zollhoefer, Michael (en_US)
dc.contributor.author: Theobalt, Christian (en_US)
dc.contributor.editor: Wang, Huamin (en_US)
dc.contributor.editor: Ye, Yuting (en_US)
dc.contributor.editor: Zordan, Victor (en_US)
dc.date.accessioned: 2023-10-16T12:32:59Z
dc.date.available: 2023-10-16T12:32:59Z
dc.date.issued: 2023
dc.description.abstract: Photo-real digital human avatars are of enormous importance in graphics, as they enable immersive communication across the globe, improve gaming and entertainment experiences, and can be particularly beneficial for AR and VR settings. However, current avatar generation approaches either fall short in high-fidelity novel view synthesis, generalization to novel motions, or reproduction of loose clothing, or they cannot render characters at the high resolution offered by modern displays. To this end, we propose HDHumans, the first method for HD human character synthesis that jointly produces an accurate and temporally coherent 3D deforming surface and highly photo-realistic images of arbitrary novel views and of motions not seen at training time. At its technical core, our method tightly integrates a classical deforming character template with neural radiance fields (NeRF), and it is carefully designed to achieve a synergy between classical surface deformation and a NeRF. First, the template guides the NeRF, which allows synthesizing novel views of a highly dynamic and articulated character and even enables the synthesis of novel motions. Second, we leverage the dense point clouds resulting from the NeRF to further improve the deforming surface via 3D-to-3D supervision. We outperform the state of the art quantitatively and qualitatively in terms of synthesis quality and resolution, as well as the quality of 3D surface reconstruction. (en_US)
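The abstract's second synergy, supervising the deforming template surface with dense point clouds extracted from the NeRF, is not detailed in this record. The sketch below shows one common way such 3D-to-3D supervision can be realized, as a symmetric Chamfer loss in PyTorch; the function and tensor names are illustrative assumptions, not the authors' implementation.

```python
import torch

def chamfer_loss(template_vertices: torch.Tensor,
                 nerf_points: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between the deformed template surface
    (V, 3) and a dense point cloud extracted from the NeRF (P, 3).
    This is a generic stand-in for the paper's 3D-to-3D supervision."""
    # Pairwise squared distances between the two point sets, shape (V, P).
    d = torch.cdist(template_vertices, nerf_points) ** 2
    # Each template vertex is pulled toward its nearest NeRF point,
    # and each NeRF point toward its nearest template vertex.
    loss_t2n = d.min(dim=1).values.mean()
    loss_n2t = d.min(dim=0).values.mean()
    return loss_t2n + loss_n2t

# Example usage with random stand-in data:
verts = torch.rand(5000, 3)    # deformed template vertices (hypothetical)
points = torch.rand(20000, 3)  # points sampled from the NeRF density (hypothetical)
loss = chamfer_loss(verts, points)
```

A symmetric formulation is used so that neither point set can collapse onto a subset of the other; for large point clouds, a nearest-neighbor structure would typically replace the dense distance matrix.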
dc.description.number: 3
dc.description.sectionheaders: Character Synthesis
dc.description.seriesinformation: Proceedings of the ACM on Computer Graphics and Interactive Techniques
dc.description.volume: 6
dc.identifier.doi: 10.1145/3606927
dc.identifier.issn: 2577-6193
dc.identifier.uri: https://doi.org/10.1145/3606927
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1145/3606927
dc.publisher: ACM Association for Computing Machinery (en_US)
dc.subject: CCS Concepts: Computing methodologies -> Computer vision; Rendering; human synthesis, neural synthesis, human modeling, human performance capture
dc.subject: Computing methodologies
dc.subject: Computer vision
dc.subject: Rendering
dc.subject: human synthesis
dc.subject: neural synthesis
dc.subject: human modeling
dc.subject: human performance capture
dc.title: HDHumans: A Hybrid Approach for High-fidelity Digital Humans (en_US)