DFGA: Digital Human Faces Generation and Animation from the RGB Video using Modern Deep Learning Technology
Authors: Jiang, Diqiong; You, Lihua; Chang, Jian; Tong, Ruofeng; Yang, Yin
Editors: Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
Published: 2022 (available 2022-10-04)
ISBN: 978-3-03868-190-8
DOI: 10.2312/pg.20221249 (https://doi.org/10.2312/pg.20221249)
Handle: https://diglib.eg.org:443/handle/10.2312/pg20221249
License: Attribution 4.0 International License
Pages: 63-64 (2 pages)

Abstract: High-quality, personalized digital human faces are widely used in media and entertainment, from film and game production to virtual reality. However, existing techniques for generating digital faces require extremely intensive manual labor, which prevents the large-scale adoption of digital face technology. To tackle this problem, the proposed research investigates deep learning-based facial modeling and animation technologies to (1) create personalized face geometry from a single image, including a recognizable neutral face shape and believable personalized blendshapes; (2) generate personalized, production-level facial skin textures from a video or image sequence; and (3) automatically drive and animate a 3D target avatar from an actor's 2D facial video or audio. Our innovation is to achieve these tasks both efficiently and precisely with an end-to-end framework built on modern deep learning technology (StyleGAN, Transformer, NeRF).
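The abstract's first goal involves a neutral face shape plus personalized blendshapes. For reference, the short Python sketch below shows the standard delta-blendshape formulation (neutral geometry plus weighted expression offsets) that personalized blendshapes are typically plugged into for animation; the array shapes, function name, and toy data are illustrative assumptions, not details of the authors' method.

    import numpy as np

    def apply_blendshapes(neutral, blendshapes, weights):
        """Standard delta-blendshape model: V = N + sum_k w_k * (B_k - N).

        neutral:     (V, 3) neutral-face vertex positions
        blendshapes: (K, V, 3) expression shapes sharing the neutral topology
        weights:     (K,) per-expression activation weights, usually in [0, 1]
        """
        deltas = blendshapes - neutral[None, :, :]          # (K, V, 3) expression offsets
        return neutral + np.tensordot(weights, deltas, 1)   # weighted sum of offsets

    # Toy usage with random data (real face meshes have thousands of vertices).
    rng = np.random.default_rng(0)
    neutral = rng.standard_normal((100, 3))
    blendshapes = neutral[None] + 0.1 * rng.standard_normal((5, 100, 3))
    weights = np.array([0.8, 0.0, 0.2, 0.0, 0.5])
    animated = apply_blendshapes(neutral, blendshapes, weights)
    print(animated.shape)  # (100, 3)

In this formulation, "driving" an avatar from video or audio amounts to predicting the per-frame weight vector; the personalization the abstract targets lies in estimating the neutral shape and the expression shapes themselves from a single image.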