Puppet Dubbing

Date
2019
Publisher
The Eurographics Association
Abstract
Dubbing puppet videos to make the characters (e.g., Kermit the Frog) convincingly speak a new speech track is a popular activity, with many examples of well-known puppets speaking lines from films or singing rap songs. But manually aligning puppet mouth movements to match a new speech track is tedious, as each syllable of the speech must match a closed-open-closed segment of mouth movement for the dub to be convincing. In this work, we present two methods to align a new speech track with puppet video: one semi-automatic and appearance-based, the other fully automatic and audio-based. The methods offer complementary advantages and disadvantages. Our appearance-based approach directly identifies closed-open-closed segments in the puppet video and is robust to low-quality audio as well as misalignments between the mouth movements and speech in the original performance, but it requires some manual annotation. Our audio-based approach assumes the original performance matches a closed-open-closed mouth segment to each syllable of the original speech. It is fully automatic and robust to visual occlusions and fast puppet movements, but it does not handle misalignments in the original performance. Through quantitative evaluation and user ratings, we compare the methods and show that both improve the credibility of the resulting video over simple baseline techniques.
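To make the alignment idea concrete, below is a minimal sketch (not the authors' implementation) of the core step both methods share: pairing each syllable of the new speech track with one closed-open-closed mouth segment of the original performance and retiming the video so segment durations match syllable durations. All segment and syllable boundaries here are hypothetical placeholders.

# Minimal illustration of syllable-to-mouth-segment alignment for puppet dubbing.
# Inputs (segment/syllable boundaries in seconds) are hypothetical; in practice
# mouth segments would come from video analysis or annotation, and syllable
# boundaries from an audio segmentation of the new speech track.

from dataclasses import dataclass

@dataclass
class Interval:
    start: float  # seconds
    end: float    # seconds

    @property
    def duration(self) -> float:
        return self.end - self.start


def align_segments_to_syllables(mouth_segments, syllables):
    """Pair mouth segments with syllables one-to-one and return, for each pair,
    the playback-speed factor that stretches or compresses the video segment
    to the syllable's duration."""
    n = min(len(mouth_segments), len(syllables))  # extra items are dropped in this sketch
    plan = []
    for seg, syl in zip(mouth_segments[:n], syllables[:n]):
        speed = seg.duration / syl.duration  # >1 plays the video faster, <1 slower
        plan.append({"video_segment": seg, "target_start": syl.start, "speed": speed})
    return plan


if __name__ == "__main__":
    # Hypothetical closed-open-closed mouth segments detected in the puppet video.
    mouth = [Interval(0.00, 0.30), Interval(0.35, 0.70), Interval(0.80, 1.10)]
    # Hypothetical syllable boundaries of the new speech track.
    speech = [Interval(0.00, 0.25), Interval(0.25, 0.65), Interval(0.70, 1.05)]
    for step in align_segments_to_syllables(mouth, speech):
        seg = step["video_segment"]
        print(f"play video {seg.start:.2f}-{seg.end:.2f}s at {step['speed']:.2f}x "
              f"starting at audio t={step['target_start']:.2f}s")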
Citation
@inproceedings{10.2312:sr.20191220,
  booktitle = {Eurographics Symposium on Rendering - DL-only and Industry Track},
  editor    = {Boubekeur, Tamy and Sen, Pradeep},
  title     = {{Puppet Dubbing}},
  author    = {Fried, Ohad and Agrawala, Maneesh},
  year      = {2019},
  publisher = {The Eurographics Association},
  ISSN      = {1727-3463},
  ISBN      = {978-3-03868-095-6},
  DOI       = {10.2312/sr.20191220}
}