Unsupervised Learning for Speech Motion Editing

dc.contributor.author: Cao, Yong
dc.contributor.author: Faloutsos, Petros
dc.contributor.author: Pighin, Frédéric
dc.contributor.editor: D. Breen and M. Lin
dc.date.accessioned: 2014-01-29T06:32:26Z
dc.date.available: 2014-01-29T06:32:26Z
dc.date.issued: 2003
dc.description.abstract: We present a new method for editing speech-related facial motions. Our method uses an unsupervised learning technique, Independent Component Analysis (ICA), to extract a set of meaningful parameters without any annotation of the data. With ICA, we are able to solve a blind source separation problem and describe the original data as a linear combination of two sources. One source captures content (speech) and the other captures style (emotion). By manipulating the independent components we can edit the motions in intuitive ways.
dc.description.seriesinformation: Symposium on Computer Animation
dc.identifier.isbn: 1-58113-659-5
dc.identifier.issn: 1727-5288
dc.identifier.uri: https://doi.org/10.2312/SCA03/225-231
dc.publisher: The Eurographics Association
dc.title: Unsupervised Learning for Speech Motion Editing
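The blind source separation described in the abstract can be sketched with scikit-learn's FastICA on synthetic data. The two-channel setup, the synthetic "content" and "style" signals, and the editing-by-scaling step below are illustrative assumptions; the paper's actual motion data and pipeline are not reproduced here.

```python
# Sketch of ICA-based separation and editing on synthetic signals.
# All signal shapes and the mixing matrix are made up for illustration.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 500)
content = np.sin(2 * np.pi * 1.5 * t)         # fast, speech-like source
style = np.sign(np.sin(2 * np.pi * 0.3 * t))  # slow, emotion-like source
S = np.c_[content, style]                     # true sources, shape (500, 2)

# Observed "motion" channels: an unknown linear mix of the two sources.
A = np.array([[1.0, 0.5], [0.4, 1.2], [0.8, 0.9]])  # hypothetical mixing matrix
X = S @ A.T                                         # shape (500, 3)

# Blind source separation: recover independent components from X alone.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)  # estimated sources, shape (500, 2)

# Editing: scale one recovered component (e.g. exaggerate "style")
# and project the edited sources back into the observation space.
S_edit = S_est.copy()
S_edit[:, 1] *= 1.5
X_edit = ica.inverse_transform(S_edit)  # edited motion, same shape as X
```

Because ICA recovers sources only up to permutation, sign, and scale, one would inspect (or correlate) the recovered components to decide which corresponds to content and which to style before editing.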