Generating 3D Hair Strips from Partial Strands using Diffusion Model

Authors: Lee, Gyeongmin; Jang, Wonjong; Lee, Seungyong
Editors: Christie, Marc; Han, Ping-Hsuan; Lin, Shih-Syun; Pietroni, Nico; Schneider, Teseo; Tsai, Hsin-Ruey; Wang, Yu-Shuen; Zhang, Eugene
Published: 2025-10-07
ISBN: 978-3-03868-295-0
DOI: https://doi.org/10.2312/pg.20251290
Handle: https://diglib.eg.org/handle/10.2312/pg20251290
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies → Parametric curve and surface models; Computing methodologies → Reconstruction
Pages: 12

Abstract: Animation-friendly hair representation is essential for real-time applications such as interactive character systems. While lightweight strip-based models are increasingly adopted as alternatives to strand-based hair for computational efficiency, creating hair strips that match the hairstyle shown in a single image remains laborious. In this paper, we present a diffusion model-based framework for 3D hair strip generation using sparse strands extracted from a single portrait image. Our key idea is to formulate this task as an inpainting problem solved through a diffusion model operating in the UV parameter space of the head scalp. We parameterize both strands and strips on a shared UV scalp map, enabling the diffusion model to learn their correlations. We then perform spatial and channel-wise inpainting to reconstruct complete strip representations from partially observed strand maps. To train our diffusion model, we address the data scarcity of 3D hair strip models by constructing a large-scale strand-strip paired dataset through an adaptive clustering algorithm that converts groups of hair strands into strip models. Comprehensive qualitative and quantitative evaluations demonstrate that our framework effectively reconstructs high-quality hair strip models from an input image while preserving the characteristic styles of hair strips. Furthermore, we show that the generated strips can be directly integrated into rigging-based animation workflows for real-time platforms such as games.
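The core formulation, inpainting strand and strip channels on a shared scalp UV map, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the UV resolution, channel layout, toy denoiser, and conditioning scheme are all placeholders chosen for the example.

```python
# Minimal sketch of spatial + channel-wise inpainting on a scalp UV map.
# All tensor shapes, the channel split, and the denoiser are assumptions;
# the record does not specify the paper's network or UV encoding.
import torch
import torch.nn as nn

UV_RES = 64           # assumed resolution of the scalp UV map
STRAND_CH = 4         # assumed channels encoding strand geometry per texel
STRIP_CH = 8          # assumed channels encoding strip geometry per texel
C = STRAND_CH + STRIP_CH

class ToyDenoiser(nn.Module):
    """Stand-in for the diffusion denoiser; it is conditioned on the masked
    observation and the mask, concatenated with the noisy map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * C, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, C, 3, padding=1),
        )

    def forward(self, x_t, obs, mask, t):
        # t (the diffusion timestep) is ignored in this toy model.
        return self.net(torch.cat([x_t, obs, mask], dim=1))

# Partially observed map: sparse strands cover only some texels (spatial
# mask), and the strip channels are entirely unobserved (channel-wise mask).
full = torch.randn(1, C, UV_RES, UV_RES)          # ground-truth stand-in
mask = torch.zeros(1, C, UV_RES, UV_RES)
seen = torch.rand(1, 1, UV_RES, UV_RES) < 0.3     # assumed strand coverage
mask[:, :STRAND_CH] = seen.float()                # strand channels: partial
obs = full * mask                                 # strip channels stay zero

# One illustrative reverse-diffusion step: predict the clean map, then
# re-impose the known strand texels so only the missing parts are generated.
model = ToyDenoiser()
x_t = torch.randn_like(full)
t = torch.tensor([500])
x0_pred = model(x_t, obs, mask, t)
x0_pred = mask * obs + (1 - mask) * x0_pred       # keep observations fixed
print(x0_pred.shape)  # (1, 12, 64, 64)
```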
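The strand-to-strip dataset construction can likewise be sketched. The greedy clustering, fixed distance threshold, and flat-ribbon fitting below are illustrative assumptions; the paper's adaptive clustering algorithm is not detailed in this record.

```python
# Hedged sketch of converting groups of hair strands into strip (ribbon)
# geometry. The distance metric, threshold tau, and ribbon construction
# are assumptions made for illustration only.
import numpy as np

def strand_distance(a, b):
    """Mean pointwise distance between two resampled strands of shape (N, 3)."""
    return np.linalg.norm(a - b, axis=1).mean()

def cluster_strands(strands, tau=1.0):
    """Greedy clustering: assign each strand to the first cluster whose
    representative strand is within tau; otherwise start a new cluster."""
    clusters = []
    for s in strands:
        for c in clusters:
            if strand_distance(s, c[0]) < tau:
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

def strip_from_cluster(cluster, width=0.02):
    """Fit a flat ribbon: average the cluster's strands into a guide curve,
    then offset it sideways to obtain the strip's two boundary curves."""
    guide = np.mean(np.stack(cluster), axis=0)        # (N, 3) center curve
    tangents = np.gradient(guide, axis=0)
    up = np.array([0.0, 1.0, 0.0])                    # assumed up axis
    side = np.cross(tangents, up)
    side /= np.linalg.norm(side, axis=1, keepdims=True) + 1e-8
    return guide - 0.5 * width * side, guide + 0.5 * width * side

# Toy data: 20 roughly downward strands, each resampled to 16 points.
rng = np.random.default_rng(0)
roots = rng.uniform(-1, 1, size=(20, 1, 3))
strands = roots + np.linspace(0, 1, 16)[None, :, None] * np.array([0, -1, 0])
strands += 0.05 * rng.normal(size=strands.shape)

clusters = cluster_strands(list(strands), tau=0.5)
strips = [strip_from_cluster(c) for c in clusters]
print(len(clusters), "clusters ->", len(strips), "strips")
```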