Show simple item record

dc.contributor.author: Yoshiyasu, Yusuke
dc.contributor.author: Gamez, Lucas
dc.contributor.editor: Wilkie, Alexander and Banterle, Francesco
dc.date.accessioned: 2020-05-24T13:42:31Z
dc.date.available: 2020-05-24T13:42:31Z
dc.date.issued: 2020
dc.identifier.isbn: 978-3-03868-101-4
dc.identifier.issn: 1017-4656
dc.identifier.uri: https://doi.org/10.2312/egs.20201012
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/egs20201012
dc.description.abstract: In this paper, we address the problem of learning 3D human pose and body shape from 2D image datasets, without using 3D supervision (body shape and pose), which is difficult to obtain in practice. The idea is to use dense correspondences between image points and a body surface, which can be annotated on in-the-wild 2D images, to extract, aggregate and learn 3D information such as body shape and pose. To do so, we propose a training strategy called "deform-and-learn", in which we alternate deformable surface registration and training of deep convolutional neural networks (ConvNets). Experimental results show that our method is comparable to previous semi-supervised techniques that use 3D supervision.
dc.publisher: The Eurographics Association
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: Learning Body Shape and Pose from Dense Correspondences
dc.description.seriesinformation: Eurographics 2020 - Short Papers
dc.description.sectionheaders: Modelling - Shape
dc.identifier.doi: 10.2312/egs.20201012
dc.identifier.pages: 37-40
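The abstract describes an alternating "deform-and-learn" strategy: a registration step fits a deformable model to the data, and a learning step trains the network on the registered result. The toy sketch below illustrates only that alternation pattern on a scalar "pose" parameter; all names, the data, and the update rules are hypothetical simplifications, not the authors' implementation.

```python
# Toy sketch of an alternating "deform-and-learn" loop (hypothetical).
# A scalar "pose" theta is recovered from noisy 1D "correspondences" by
# alternating: (1) registration -- fit theta to the observations,
# initialised from the current network prediction; (2) learning -- move
# the "network" (a single weight w on a fixed input x) toward the
# registered theta, which acts as a pseudo-label.

def register(observations, theta_init, steps=50, lr=0.1):
    """Gradient descent on sum((theta - y)^2): fits theta to the data."""
    theta = theta_init
    for _ in range(steps):
        grad = sum(2.0 * (theta - y) for y in observations)
        theta -= lr * grad / len(observations)
    return theta

def deform_and_learn(observations, x=1.0, rounds=10):
    w = 0.0                                   # "network" weight, untrained
    for _ in range(rounds):
        pred = w * x                          # current network estimate
        theta = register(observations, pred)  # step 1: registration
        w += 0.5 * (theta / x - w)            # step 2: supervised update
    return w * x

obs = [2.9, 3.1, 3.0]          # noisy observations of a true pose of 3.0
estimate = deform_and_learn(obs)
```

In the paper's setting the registration step deforms a body surface to match dense image-to-surface correspondences, and the learning step trains ConvNets on the registered shapes; here both steps are collapsed to one-dimensional stand-ins to keep the alternation visible.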



Except where otherwise noted, this item's license is described as Attribution 4.0 International License