Learning Reduced-Order Feedback Policies for Motion Skills

dc.contributor.author: Ding, Kai
dc.contributor.author: Liu, Libin
dc.contributor.author: Panne, Michiel van de
dc.contributor.author: Yin, KangKang
dc.contributor.editor: Florence Bertails-Descoubes and Stelian Coros and Shinjiro Sueda
dc.date.accessioned: 2016-01-19T09:01:15Z
dc.date.available: 2016-01-19T09:01:15Z
dc.date.issued: 2015
dc.description.abstract: We introduce a method for learning low-dimensional linear feedback strategies for controlling physics-based animated characters around a given reference trajectory. This allows state abstractions and action abstractions to be learned, reducing the need to rely on manually designed abstractions such as the center-of-mass state or foot-placement actions. Once learned, the compact feedback structure allows simulated characters to respond to changes in the environment and changes in goals. The approach is based on policy search in the space of reduced-order linear output feedback matrices. We show that these can be used to replace or further reduce manually designed state and action abstractions. The approach is sufficiently general to allow for the development of unconventional feedback loops, such as feedback based on ground reaction forces. Results are demonstrated for a mix of 2D and 3D systems, including tilting-platform balancing, walking, running, rolling, targeted kicks, and several types of ball-hitting tasks.
dc.description.sectionheaders: Characters & Control
dc.description.seriesinformation: ACM SIGGRAPH / Eurographics Symposium on Computer Animation
dc.identifier.doi: 10.1145/2786784.2786802
dc.identifier.isbn: 978-1-4503-3496-9
dc.identifier.pages: 83-92
dc.identifier.uri: https://doi.org/10.1145/2786784.2786802
dc.publisher: ACM SIGGRAPH
dc.subject: human simulation
dc.subject: control
dc.subject: character animation
dc.title: Learning Reduced-Order Feedback Policies for Motion Skills
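To make the abstract's core idea concrete: the paper searches over reduced-order linear output feedback matrices K, so that the control action is a linear function u = -K y of a low-dimensional observation y. The sketch below is NOT the paper's implementation; it is a minimal, hypothetical illustration on a toy double-integrator system, using simple random-search hill climbing as a stand-in for the paper's policy search. All names and parameters here are invented for illustration.

```python
# Hypothetical sketch: policy search over a reduced-order linear output
# feedback matrix K on a toy double integrator (x'' = u). Not the paper's
# method or code; a stand-in to illustrate "policy search in the space of
# reduced-order linear output feedback matrices".
import numpy as np

def rollout_cost(K, steps=200, dt=0.05):
    """Simulate x'' = u with output feedback u = -K @ y, y = [pos, vel].
    Cost accumulates squared deviation from the reference (the origin)
    plus a small penalty on control effort."""
    pos, vel = 1.0, 0.0              # start perturbed away from the reference
    cost = 0.0
    for _ in range(steps):
        y = np.array([pos, vel])     # low-dimensional observation
        u = float(-K @ y)            # linear output feedback action
        vel += u * dt                # semi-implicit Euler integration
        pos += vel * dt
        cost += pos * pos + 0.1 * u * u
    return cost

def policy_search(iters=300, sigma=0.2, seed=0):
    """Greedy random-search hill climbing on the rollout cost."""
    rng = np.random.default_rng(seed)
    K = np.zeros(2)                  # 1x2 feedback matrix (1 action, 2 observations)
    best = rollout_cost(K)
    for _ in range(iters):
        cand = K + sigma * rng.standard_normal(2)
        cost = rollout_cost(cand)
        if cost < best:              # keep candidates that improve the rollout
            K, best = cand, cost
    return K, best

K, best = policy_search()
# The learned K should track the reference far better than no feedback (K = 0).
```

In the paper's setting, the rollout would simulate a full physics-based character around a reference trajectory, and y could include unconventional signals such as ground reaction forces; the structure of the search over a small K matrix is the same.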