Unsupervised Learning for Speech Motion Editing

Yong Cao, Petros Faloutsos, Frédéric Pighin
D. Breen and M. Lin (Editors)
2003
ISBN: 1-58113-659-5
ISSN: 1727-5288
DOI: https://doi.org/10.2312/SCA03/225-231

Abstract: We present a new method for editing speech-related facial motions. Our method uses an unsupervised learning technique, Independent Component Analysis (ICA), to extract a set of meaningful parameters without any annotation of the data. With ICA, we are able to solve a blind source separation problem and describe the original data as a linear combination of two sources. One source captures content (speech) and the other captures style (emotion). By manipulating the independent components we can edit the motions in intuitive ways.
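The abstract's core idea, separating a linear mixture into independent "content" and "style" sources and editing the components before re-mixing, can be illustrated with a minimal sketch. This is not the authors' implementation: it uses scikit-learn's FastICA on two hypothetical synthetic sources (a fast sinusoid standing in for speech content, a slow square wave standing in for emotional style) rather than real facial motion data.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
# Hypothetical stand-ins for the two latent sources described in the abstract.
content = np.sin(7.0 * t)            # fast variation: "speech content"
style = np.sign(np.sin(0.8 * t))     # slow variation: "emotional style"
S = np.c_[content, style]

# Observed data as a linear combination of the sources (mixing matrix unknown
# to the separation step, as in blind source separation).
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
X = S @ A.T

# ICA recovers statistically independent components without any labels.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)

# Editing: scale one recovered component (e.g. exaggerate "style"),
# then map back to the observation space.
S_edit = S_est.copy()
S_edit[:, 1] *= 2.0
X_edit = ica.inverse_transform(S_edit)
```

Note that ICA recovers sources only up to permutation, sign, and scale, so in practice the components must be inspected to decide which one corresponds to content and which to style.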