Hi3DFace: High-Realistic 3D Face Reconstruction From a Single Occluded Image

Authors: Huang, Dongjin; Shi, Yongsheng; Qu, Jiantao; Liu, Jinhua; Tang, Wen
Editors: Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
Date: 2025-11-07
Year: 2025
ISSN: 1467-8659
DOI: 10.1111/cgf.70277 (https://doi.org/10.1111/cgf.70277)
Handle: https://diglib.eg.org/handle/10.1111/cgf70277
Pages: 14

Abstract: We propose Hi3DFace, a novel framework for simultaneous de-occlusion and high-fidelity 3D face reconstruction. To address real-world occlusions, we construct a diverse facial dataset by simulating common obstructions and present TMANet, a transformer-based multi-scale attention network that effectively removes occlusions and restores clean face images. For the 3D face reconstruction stage, we propose a coarse-medium-fine self-supervised scheme. In the coarse reconstruction pipeline, we adopt a face regression network to predict 3DMM coefficients for generating a smooth 3D face. In the medium-scale reconstruction pipeline, we propose a novel depth displacement network, DDFTNet, to remove noise and restore rich details to the smooth 3D geometry. In the fine-scale reconstruction pipeline, we design a GCN (graph convolutional network) refiner to enhance the fidelity of 3D textures. Additionally, a light-aware network (LightNet) is proposed to distil lighting parameters, ensuring illumination consistency between reconstructed 3D faces and input images. Extensive experimental results demonstrate that the proposed Hi3DFace significantly outperforms state-of-the-art reconstruction methods on four public datasets and five constructed occlusion-type datasets. Hi3DFace is robust and effective in removing occlusions and reconstructing 3D faces from real-world occluded facial images.

Keywords: modelling; appearance modelling; facial modelling; geometric modelling
CCS Concepts: Computing methodologies → Texturing; Reflectance modelling; Shape modelling
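The abstract outlines a staged pipeline: de-occlusion, coarse 3DMM coefficient regression, medium-scale depth-displacement detailing, fine-scale GCN texture refinement, and lighting estimation. As a rough illustration of how such stages could be composed in a single forward pass, the following is a minimal PyTorch sketch; every module, dimension, and coefficient split below is a hypothetical stand-in inferred from the abstract, not the authors' implementation of TMANet, DDFTNet, the GCN refiner, or LightNet.

```python
# Illustrative sketch only (not the paper's released code): composing
# de-occlusion and coarse-to-fine face reconstruction stages in one pass.
import torch
import torch.nn as nn

N_VERTS = 1000                     # toy mesh size (assumption)
N_ID, N_EXP, N_TEX = 80, 64, 80    # toy 3DMM coefficient sizes (assumption)

class CoarseToFineFaceSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Stage 1: occlusion removal (stand-in for a transformer-based TMANet).
        self.deocc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                   nn.Conv2d(16, 3, 3, padding=1))
        # Stage 2: regress 3DMM coefficients from the cleaned image.
        self.regressor = nn.Sequential(nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                                       nn.Linear(3 * 8 * 8, N_ID + N_EXP + N_TEX))
        # Toy 3DMM bases; a real system would load morphable-model bases.
        self.register_buffer("id_basis", torch.randn(N_ID, N_VERTS * 3) * 0.01)
        self.register_buffer("exp_basis", torch.randn(N_EXP, N_VERTS * 3) * 0.01)
        self.register_buffer("tex_basis", torch.randn(N_TEX, N_VERTS * 3) * 0.01)
        self.register_buffer("mean_shape", torch.zeros(N_VERTS * 3))
        self.register_buffer("mean_tex", torch.full((N_VERTS * 3,), 0.5))
        # Stage 3: per-vertex displacement for mid-scale detail (DDFTNet stand-in).
        self.displace = nn.Sequential(nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                                      nn.Linear(3 * 8 * 8, N_VERTS * 3))
        # Stage 4: per-vertex texture refinement (GCN refiner stand-in).
        self.tex_refine = nn.Linear(N_VERTS * 3, N_VERTS * 3)
        # Stage 5: lighting parameters, e.g. 27 SH coefficients (assumption).
        self.light = nn.Sequential(nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                                   nn.Linear(3 * 8 * 8, 27))

    def forward(self, occluded_img):
        clean = self.deocc(occluded_img)                      # de-occluded image
        coeffs = self.regressor(clean)
        a_id, a_exp, a_tex = coeffs.split([N_ID, N_EXP, N_TEX], dim=1)
        coarse_shape = self.mean_shape + a_id @ self.id_basis + a_exp @ self.exp_basis
        coarse_tex = self.mean_tex + a_tex @ self.tex_basis
        detailed_shape = coarse_shape + self.displace(clean)  # add mid-scale detail
        fine_tex = coarse_tex + self.tex_refine(coarse_tex)   # refine texture
        lighting = self.light(clean)                          # illumination params
        return detailed_shape, fine_tex, lighting

model = CoarseToFineFaceSketch()
shape, tex, light = model(torch.rand(1, 3, 224, 224))
print(shape.shape, tex.shape, light.shape)  # [1, 3000], [1, 3000], [1, 27]
```

The sketch only shows the data flow between stages; the paper's self-supervised losses, graph convolutions, and attention blocks are not reproduced here.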