Generating 3D Hair Strips from Partial Strands using Diffusion Model
| dc.contributor.author | Lee, Gyeongmin | en_US |
| dc.contributor.author | Jang, Wonjong | en_US |
| dc.contributor.author | Lee, Seungyong | en_US |
| dc.contributor.editor | Christie, Marc | en_US |
| dc.contributor.editor | Han, Ping-Hsuan | en_US |
| dc.contributor.editor | Lin, Shih-Syun | en_US |
| dc.contributor.editor | Pietroni, Nico | en_US |
| dc.contributor.editor | Schneider, Teseo | en_US |
| dc.contributor.editor | Tsai, Hsin-Ruey | en_US |
| dc.contributor.editor | Wang, Yu-Shuen | en_US |
| dc.contributor.editor | Zhang, Eugene | en_US |
| dc.date.accessioned | 2025-10-07T06:04:19Z | |
| dc.date.available | 2025-10-07T06:04:19Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | Animation-friendly hair representation is essential for real-time applications such as interactive character systems. While lightweight strip-based models are increasingly adopted as alternatives to strand-based hair for computational efficiency, creating such hair strips based on the hairstyle shown in a single image remains laborious. In this paper, we present a diffusion model-based framework for 3D hair strip generation using sparse strands extracted from a single portrait image. Our key idea is to formulate this task as an inpainting problem solved through a diffusion model operating in the UV parameter space of the head scalp. We parameterize both strands and strips on a shared UV scalp map, enabling the diffusion model to learn their correlations. We then perform spatial and channel-wise inpainting to reconstruct complete strip representations from partially observed strand maps. To train our diffusion model, we address the data scarcity problem of 3D hair strip models by constructing a large-scale strand-strip paired dataset through our adaptive clustering algorithm that converts groups of hair strands into strip models. Comprehensive qualitative and quantitative evaluations demonstrate that our framework effectively reconstructs high-quality hair strip models from an input image while preserving characteristic styles of strips. Furthermore, we show that the generated strips can be directly integrated into rigging-based animation workflows for real-time platforms such as games. | en_US |
| dc.description.sectionheaders | 3D Reconstruction | |
| dc.description.seriesinformation | Pacific Graphics Conference Papers, Posters, and Demos | |
| dc.identifier.doi | 10.2312/pg.20251290 | |
| dc.identifier.isbn | 978-3-03868-295-0 | |
| dc.identifier.pages | 12 pages | |
| dc.identifier.uri | https://doi.org/10.2312/pg.20251290 | |
| dc.identifier.uri | https://diglib.eg.org/handle/10.2312/pg20251290 | |
| dc.publisher | The Eurographics Association | en_US |
| dc.rights | Attribution 4.0 International License | |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
| dc.subject | CCS Concepts: Computing methodologies → Parametric curve and surface models; Reconstruction | |
| dc.title | Generating 3D Hair Strips from Partial Strands using Diffusion Model | en_US |
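The abstract above formulates strip generation as diffusion-based inpainting over a shared scalp UV map, with spatial and channel-wise masking of the observed strand channels. The sketch below illustrates that general idea under heavy assumptions: a RePaint-style DDPM inpainting loop in which observed strand values are forward-noised and re-imposed at every reverse step. The map resolution, channel split, and the `denoiser` network are all hypothetical placeholders, not the authors' actual implementation.

```python
import torch

# Hypothetical setup: strands and strips rasterized onto a shared 256x256
# scalp UV map. Channel counts below are illustrative assumptions, not
# values from the paper.
H = W = 256
C_STRAND, C_STRIP = 3, 8     # assumed channel split: strand dirs / strip params
T = 1000                     # diffusion timesteps

betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

def inpaint(denoiser, strand_map, strand_mask):
    """RePaint-style inpainting sketch (not the authors' method).

    At each reverse diffusion step, the observed strand channels/pixels are
    forward-noised to the current noise level and re-imposed via the mask,
    while the model fills in the missing strip channels and occluded regions.

    strand_map : (1, C_STRAND + C_STRIP, H, W), strip channels zeroed
    strand_mask: same shape; 1 where a value is observed, 0 elsewhere
    denoiser   : hypothetical noise-prediction network, denoiser(x, t) -> eps
    """
    x = torch.randn_like(strand_map)  # start the reverse process from noise
    for t in reversed(range(T)):
        # Forward-noise the known strand observations to noise level t.
        noise = torch.randn_like(strand_map)
        known = alpha_bar[t].sqrt() * strand_map \
              + (1 - alpha_bar[t]).sqrt() * noise
        # Spatial + channel-wise masking: keep observations, generate the rest.
        x = strand_mask * known + (1 - strand_mask) * x
        # Standard DDPM reverse step with the predicted noise.
        eps = denoiser(x, torch.tensor([t]))
        mean = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean
    return x  # completed UV map, including the generated strip channels
```

In this reading, the strip geometry would be decoded from the generated strip channels of the completed UV map; how the paper actually parameterizes strips on the scalp map is not specified here, so that decoding step is left out.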