Title: Feature Disentanglement in GANs for Photorealistic Multi-view Hair Transfer
Authors: Jiayi Xu, Zhengyang Wu, Chenming Zhang, Xiaogang Jin, Yaohua Ji, Marc Christie, Nico Pietroni, Yu-Shuen Wang
Published: 2025-10-07
Year: 2025
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.70245
Handle: https://diglib.eg.org/handle/10.1111/cgf70245
Pages: 12

Abstract: Fast and highly realistic multi-view hair transfer plays a crucial role in evaluating the effectiveness of virtual hair try-on systems. However, GAN-based generation and editing methods face persistent challenges in feature disentanglement. Achieving pixel-level, attribute-specific modifications, such as changing hairstyle or hair color without affecting other facial features, remains a long-standing problem. To address this limitation, we propose a novel multi-view hair transfer framework that leverages a hair-only intermediate facial representation and a 3D-guided masking mechanism. Our approach disentangles triplane facial features into spatial geometric components and global style descriptors, enabling independent and precise control over hairstyle and hair color. By introducing a dedicated intermediate representation focused solely on hair and incorporating a two-stage feature fusion strategy guided by the generated 3D mask, our framework achieves fine-grained local editing across multiple viewpoints while preserving facial integrity and improving background consistency. Extensive experiments demonstrate that our method produces visually compelling and natural results in side-to-front view hair transfer tasks, offering a robust and flexible solution for high-fidelity hair reconstruction and manipulation.

CCS Concepts: Computing methodologies → Computer graphics; Image manipulation; Image processing
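The abstract mentions a two-stage feature fusion guided by a generated 3D mask but does not specify its form. A minimal NumPy sketch of one plausible reading is given below; the function name `two_stage_mask_fusion`, the hard-blend/soft-boundary split, and the Gaussian boundary weight are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def two_stage_mask_fusion(face_feat, hair_feat, mask, boundary_sigma=0.15):
    """Hypothetical sketch of mask-guided two-stage feature fusion.

    face_feat, hair_feat: (C, H, W) feature maps from the source face
        and the hair-only intermediate representation.
    mask: (H, W) soft hair mask in [0, 1], e.g. rendered from a 3D proxy.
    """
    m = mask[None]  # broadcast mask over the channel axis
    # Stage 1: hard blend -- take hair features inside the mask and
    # face features outside, leaving non-hair facial regions untouched.
    hard = np.where(m > 0.5, hair_feat, face_feat)
    # Stage 2: soft refinement near the mask boundary -- linearly
    # interpolate features where the mask is uncertain, to avoid seams.
    soft = m * hair_feat + (1.0 - m) * face_feat
    # Gaussian weight peaking where mask is ambiguous (mask near 0.5).
    w = np.exp(-((m - 0.5) ** 2) / (2.0 * boundary_sigma ** 2))
    return w * soft + (1.0 - w) * hard
```

At the mask extremes (0 or 1) both stages agree, so the output reduces to pure face or pure hair features; the soft blend only takes over in the uncertain boundary band, which is one simple way to get the seam-free local editing the abstract claims.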