Show simple item record

dc.contributor.author: Rempe, Davis [en_US]
dc.contributor.author: Guibas, Leonidas J. [en_US]
dc.contributor.author: Hertzmann, Aaron [en_US]
dc.contributor.author: Russell, Bryan [en_US]
dc.contributor.author: Villegas, Ruben [en_US]
dc.contributor.author: Yang, Jimei [en_US]
dc.contributor.editor: Holden, Daniel [en_US]
dc.date.accessioned: 2020-10-04T14:46:25Z
dc.date.available: 2020-10-04T14:46:25Z
dc.date.issued: 2020
dc.identifier.isbn: 978-3-03868-129-8
dc.identifier.issn: 1727-5288
dc.identifier.uri: https://doi.org/10.2312/sca.20201218
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/sca20201218
dc.description.abstract: Existing methods for human motion from video predict 2D and 3D poses that are approximately accurate, but contain visible errors that violate physical constraints, such as feet penetrating the ground and bodies leaning at extreme angles. We present a physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input. We first estimate ground contact timings with a neural network which is trained without hand-labeled data. A physics-based trajectory optimization then solves for a physically plausible motion, based on the inputs. We show this process produces motions that are more realistic than those from purely kinematic methods for character animation from dynamic videos. A detailed report that fully describes our method is available at geometry.stanford.edu/projects/human-dynamics-eccv-2020. [en_US]
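The abstract describes a two-stage pipeline: per-frame foot-ground contact estimation, followed by a physics-based trajectory optimization that enforces those contacts. A minimal, purely illustrative sketch of that structure is below; the function names, thresholds, and the height-and-velocity heuristic are assumptions for this toy example (the actual method uses a trained neural network for contacts and full trajectory optimization, as detailed in the linked report).

```python
def estimate_contacts(foot_heights, velocity_eps=0.05, height_eps=0.05):
    """Label a frame as 'in contact' when the foot is low and nearly still.
    A simple heuristic standing in for the paper's learned contact classifier."""
    contacts = []
    for i, h in enumerate(foot_heights):
        vel = abs(foot_heights[i] - foot_heights[i - 1]) if i > 0 else 0.0
        contacts.append(h < height_eps and vel < velocity_eps)
    return contacts

def enforce_ground_contact(foot_heights, contacts, ground=0.0):
    """Crude stand-in for trajectory optimization: pin contacting feet to the
    ground plane and clamp other frames so feet never penetrate it."""
    return [ground if c else max(h, ground)
            for h, c in zip(foot_heights, contacts)]

# Noisy estimated foot heights with slight ground penetration at frames 2-3.
heights = [0.30, 0.10, -0.02, -0.01, 0.12, 0.35]
contacts = estimate_contacts(heights)
fixed = enforce_ground_contact(heights, contacts)
# After the fix, no frame penetrates the ground plane.
```

The design mirrors the paper's split: a cheap per-frame classifier produces contact labels, and a second stage uses those labels as constraints when correcting the motion.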
dc.publisher: The Eurographics Association [en_US]
dc.subject: Computing methodologies
dc.subject: Computer vision problems
dc.subject: Motion capture
dc.title: Contact and Human Dynamics from Monocular Video [en_US]
dc.description.seriesinformation: Eurographics/ACM SIGGRAPH Symposium on Computer Animation - Showcases
dc.description.sectionheaders: Showcases
dc.identifier.doi: 10.2312/sca.20201218
dc.identifier.pages: 3-5

