Title: Representing Animatable Avatar via Factorized Neural Fields
Authors: Song, Chunjin; Wu, Zhijie; Wandt, Bastian; Sigal, Leonid; Rhodin, Helge
Editors: Attene, Marco; Sellán, Silvia
Date: 2025-06-20
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.70192
Handle: https://diglib.eg.org/handle/10.1111/cgf70192
Pages: 13
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies → Reconstruction; Computing methodologies → Shape inference

Abstract: For reconstructing high-fidelity 3D human models from monocular videos, it is crucial to maintain consistent large-scale body shapes together with finely matched subtle wrinkles. This paper explores how per-frame rendering results can be factorized into a pose-independent component and a corresponding pose-dependent counterpart to facilitate frame consistency at multiple scales. Pose-adaptive texture features are further improved by restricting the frequency bands of these two components: pose-independent outputs are expected to be low-frequency, while high-frequency information is linked to pose-dependent factors. We implement this with a dual-branch network. The first branch takes coordinates in the canonical space as input, while the second additionally takes the features output by the first branch and the pose information of each frame. A final network integrates the information predicted by both branches and uses volume rendering to generate photo-realistic 3D human images. Through experiments, we demonstrate that our method consistently surpasses all state-of-the-art methods in preserving high-frequency details and ensuring consistent body contours. Our code is available at https://github.com/ChunjinSong/facavatar.
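
The abstract describes a dual-branch factorization: a pose-independent branch over canonical coordinates (intended to be low-frequency) and a pose-dependent branch conditioned on the first branch's features and the per-frame pose (carrying high-frequency detail), fused by an integration network whose outputs drive volume rendering. The PyTorch sketch below illustrates that structure only; it is not the authors' implementation (their code is at the GitHub URL above), and every name and dimension here (FactorizedField, pose_dim=72, feat_dim=64) is an illustrative assumption. The explicit frequency-band restriction (e.g., band-limited positional encodings) is omitted for brevity.

    # Minimal sketch of the dual-branch factorization described in the abstract.
    # All names and dimensions are illustrative assumptions, not the authors' code
    # (see https://github.com/ChunjinSong/facavatar for the actual implementation).
    import torch
    import torch.nn as nn


    def mlp(dims):
        """Simple ReLU MLP given a list of layer widths."""
        layers = []
        for i in range(len(dims) - 1):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                layers.append(nn.ReLU())
        return nn.Sequential(*layers)


    class FactorizedField(nn.Module):
        """Pose-independent branch (low-frequency base) plus pose-dependent branch
        (high-frequency, per-frame detail), fused by a small integration head that
        outputs density and color for volume rendering."""

        def __init__(self, pose_dim=72, feat_dim=64):
            super().__init__()
            # Branch 1: canonical coordinates -> pose-independent feature.
            self.pose_independent = mlp([3, 128, 128, feat_dim])
            # Branch 2: canonical coordinates + branch-1 feature + per-frame pose
            # -> pose-dependent residual feature.
            self.pose_dependent = mlp([3 + feat_dim + pose_dim, 128, 128, feat_dim])
            # Integration network: fused features -> (density, RGB).
            self.head = mlp([2 * feat_dim, 128, 4])

        def forward(self, x_canonical, pose):
            f_static = self.pose_independent(x_canonical)
            f_dynamic = self.pose_dependent(
                torch.cat([x_canonical, f_static, pose], dim=-1))
            out = self.head(torch.cat([f_static, f_dynamic], dim=-1))
            sigma = torch.relu(out[..., :1])      # volume density
            rgb = torch.sigmoid(out[..., 1:])     # color
            return sigma, rgb


    if __name__ == "__main__":
        # Toy usage: 1024 sample points along camera rays, one SMPL-style pose
        # vector broadcast to every sample point.
        pts = torch.rand(1024, 3)
        pose = torch.rand(1, 72).expand(1024, -1)
        sigma, rgb = FactorizedField()(pts, pose)
        print(sigma.shape, rgb.shape)  # torch.Size([1024, 1]) torch.Size([1024, 3])

Feeding the first branch's feature into the pose-dependent branch is what lets the high-frequency residual adapt to the shared, pose-independent base, which is the factorization the abstract argues keeps body contours consistent across frames.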