FACTS: Facial Animation Creation using the Transfer of Styles

Authors: Saunders, Jack R.; Namboodiri, Vinay P.
Editors: Hu, Ruizhen; Charalambous, Panayiotis
Issued: 2024-04-30
Year: 2024
ISBN: 978-3-03868-237-0
ISSN: 1017-4656
DOI: https://doi.org/10.2312/egs.20241017
URI: https://diglib.eg.org/handle/10.2312/egs20241017
Pages: 4
License: Attribution 4.0 International License (CC BY 4.0)
CCS Concepts: Computing methodologies → Animation; Computing methodologies → Machine learning

Abstract: The ability to accurately capture and express emotions is a critical aspect of creating believable characters in video games and other forms of entertainment. Traditionally, this animation has been achieved with artistic effort or performance capture, both of which incur costs in time and labor. More recently, audio-driven models have seen success; however, these often lack expressiveness in areas not correlated with the audio signal. In this paper, we present a novel approach to facial animation that takes existing animations and allows for the modification of their style characteristics. We maintain the lip-sync of the animations thanks to a novel viseme-preserving loss. We perform quantitative and qualitative experiments to demonstrate the effectiveness of our work.
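
The abstract credits lip-sync preservation to a viseme-preserving loss but does not spell out its form here. The snippet below is only a minimal sketch of one plausible formulation, assuming per-frame blendshape-style animation vectors and a hypothetical viseme_mask that selects lip-related channels; these names and the representation are assumptions, not the paper's definitions.

```python
import torch

def viseme_preserving_loss(original, stylized, viseme_mask):
    """Hypothetical sketch: penalize changes to viseme-related animation
    channels so that style transfer leaves the original lip-sync intact.

    original, stylized: (T, D) tensors of per-frame animation parameters
                        (e.g. blendshape weights) -- an assumed representation,
                        not taken from the paper.
    viseme_mask:        (D,) tensor of 0/1 flags marking lip-related channels.
    """
    # Only differences on lip-related channels contribute to the loss.
    diff = (stylized - original) * viseme_mask
    return diff.pow(2).mean()

# Toy usage with random data (dimensions are illustrative only).
T, D = 100, 52                       # frames x animation channels (assumed)
orig = torch.rand(T, D)
styl = orig + 0.1 * torch.randn(T, D)
mask = torch.zeros(D)
mask[:20] = 1.0                      # pretend the first 20 channels drive the lips
print(viseme_preserving_loss(orig, styl, mask))
```

In such a setup this term would be added to the style-transfer training objective with a weighting factor, so the model is free to restyle non-mouth regions while being penalized for drifting away from the source animation on the viseme channels.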