RETA3D: Real-Time Animatable 3D Gaussian Head Generation

Authors: Chen, Shu-Yu; Qiu, Chunshuo; Liu, Feng-Lin; Cao, Yanpei; Fu, Hongbo; Gao, Lin
Date: 2026-04-20
ISBN: 978-3-03868-299-8
ISSN: 2309-5059
Handle: https://diglib.eg.org/handle/10.2312/egs20261025
DOI: https://doi.org/10.2312/egs.20261025
License: CC-BY-4.0
Keywords: Animation; 3D imaging; Adversarial learning
Pages: 4

Abstract: 3D avatar GANs (generative adversarial networks) learn 3D priors from extensive collections of 2D portrait images. However, existing 3D avatar GANs either struggle with real-time performance or lack 3D consistency. To address these issues, we present RETA3D, a novel 3D GAN framework leveraging the efficiency of 3D Gaussian Splatting (3DGS). Our core contribution is a consecutive mesh-binding 3D Gaussian representation that tightly integrates 3D Gaussians with a FLAME mesh template via a novel local coordinate system defined by surface normals and head pose, ensuring consistent animation. We also introduce a dynamic texture generation framework that separates static and dynamic texture components, significantly improving reenactment speed. This framework generates a static texture once and efficiently computes dynamic texture updates per frame using a compact neural network conditioned on FLAME parameters.
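The record above does not spell out RETA3D's exact binding scheme, but the general idea of tying a Gaussian to a deforming mesh through a surface-normal-aligned local frame can be sketched generically. The following is a minimal illustration, not the paper's method: the function names (`local_frame`, `bind_gaussian`), the barycentric anchoring, and the tangent-frame construction are all assumptions for the sketch.

```python
import numpy as np

def local_frame(v0, v1, v2):
    """Orthonormal frame for a triangle: columns are (tangent, bitangent, normal).

    A hypothetical stand-in for the local coordinate system defined by
    surface normals that the abstract describes.
    """
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    t = v1 - v0
    t = t - n * (t @ n)                  # project edge onto the tangent plane
    t = t / np.linalg.norm(t)
    b = np.cross(n, t)
    return np.stack([t, b, n], axis=1)   # 3x3 rotation matrix

def bind_gaussian(tri, bary, local_offset):
    """Map a Gaussian mean from a triangle's local frame to world space.

    tri          : (3, 3) array, vertices of one (deformed) mesh triangle
    bary         : (3,) barycentric weights of the anchor point on the triangle
    local_offset : (3,) Gaussian mean expressed in the triangle's local frame
    """
    anchor = bary @ tri                  # anchor point on the mesh surface
    R = local_frame(tri[0], tri[1], tri[2])
    return anchor + R @ local_offset     # follows the surface as the mesh deforms
```

Because the frame is rebuilt from the deformed vertices each frame, a fixed `local_offset` rides along with the animated surface, which is the consistency property a mesh-bound Gaussian representation is after.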