Show simple item record

dc.contributor.author: Ferstl, Ylva
dc.date.accessioned: 2022-01-03T07:01:45Z
dc.date.available: 2022-01-03T07:01:45Z
dc.date.issued: 2021-08-03
dc.identifier.citation: Ferstl, Ylva, Machine Learning For Plausible Gesture Generation From Speech For Virtual Humans, Trinity College Dublin. School of Computer Science & Statistics, 2021 [en_US]
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/2633145
dc.description.abstract: The growing use of virtual humans in an array of applications such as games, human-computer interfaces, and virtual reality demands the design of appealing and engaging characters, while minimizing the cost and time of creation. Nonverbal behavior is an integral part of human communication and important for believable embodied virtual agents. Co-speech gesture represents a key aspect of nonverbal communication, and virtual agents are more engaging when exhibiting gesture behavior. Hand-animation of gesture is costly and does not scale to applications where agents may produce new utterances after deployment. Automated gesture generation is therefore attractive, enabling any new utterance to be animated on the go. A major body of research has been dedicated to methods of automatic gesture generation, but generating expressive and defined gesture motion has commonly relied on explicit formulation of if-then rules or probabilistic modelling of annotated features. Machine learning approaches, able to work on unlabelled data, are catching up; however, they often still produce averaged motion that fails to capture the speech-gesture relationship adequately. The results from machine-learned models point to the high complexity of the speech-to-motion learning task. In this work, we explore a number of machine learning methods for improving the speech-to-motion learning outcome, including the use of transfer learning from speech and motion models, adversarial training, as well as modelling explicit expressive gesture parameters from speech. We develop a method for automatically segmenting individual gestures from a motion stream, enabling detailed analysis of the speech-gesture relationship. We present two large multimodal datasets of conversational speech and motion, designed specifically for this modelling problem. We finally present and evaluate a novel speech-to-gesture system, merging methods of machine learning and database sampling. [en_US]
dc.description.sponsorship: Science Foundation Ireland (SFI) [en_US]
dc.language.iso: en [en_US]
dc.publisher: Trinity College Dublin, The University of Dublin [en_US]
dc.subject: gesture generation [en_US]
dc.subject: computer animation [en_US]
dc.subject: motion modelling [en_US]
dc.subject: machine learning [en_US]
dc.subject: conversational agents [en_US]
dc.subject: co-speech gesture [en_US]
dc.title: Machine Learning For Plausible Gesture Generation From Speech For Virtual Humans [en_US]
dc.type: Animation [en_US]
dc.type: Thesis [en_US]


