Generating 3D Hair Strips from Partial Strands using Diffusion Model

dc.contributor.author: Lee, Gyeongmin (en_US)
dc.contributor.author: Jang, Wonjong (en_US)
dc.contributor.author: Lee, Seungyong (en_US)
dc.contributor.editor: Christie, Marc (en_US)
dc.contributor.editor: Han, Ping-Hsuan (en_US)
dc.contributor.editor: Lin, Shih-Syun (en_US)
dc.contributor.editor: Pietroni, Nico (en_US)
dc.contributor.editor: Schneider, Teseo (en_US)
dc.contributor.editor: Tsai, Hsin-Ruey (en_US)
dc.contributor.editor: Wang, Yu-Shuen (en_US)
dc.contributor.editor: Zhang, Eugene (en_US)
dc.date.accessioned: 2025-10-07T06:04:19Z
dc.date.available: 2025-10-07T06:04:19Z
dc.date.issued: 2025
dc.description.abstract: Animation-friendly hair representation is essential for real-time applications such as interactive character systems. While lightweight strip-based models are increasingly adopted as alternatives to strand-based hair for computational efficiency, creating such hair strips based on the hairstyle shown in a single image remains laborious. In this paper, we present a diffusion model-based framework for 3D hair strip generation using sparse strands extracted from a single portrait image. Our key idea is to formulate this task as an inpainting problem solved through a diffusion model operating in the UV parameter space of the head scalp. We parameterize both strands and strips on a shared UV scalp map, enabling the diffusion model to learn their correlations. We then perform spatial and channel-wise inpainting to reconstruct complete strip representations from partially observed strand maps. To train our diffusion model, we address the data scarcity problem of 3D hair strip models by constructing a large-scale strand-strip paired dataset through our adaptive clustering algorithm that converts groups of hair strands into strip models. Comprehensive qualitative and quantitative evaluations demonstrate that our framework effectively reconstructs high-quality hair strip models from an input image while preserving characteristic styles of strips. Furthermore, we show that the generated strips can be directly integrated into rigging-based animation workflows for real-time platforms such as games. (en_US)
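As a rough illustration of the abstract's key idea, the following Python/NumPy sketch shows how partial strands might be rasterized into a shared multi-channel UV scalp map, and how a spatial and channel-wise inpainting mask could be derived from the observed texels. This is a toy sketch under stated assumptions, not the authors' implementation: the resolution, channel layout, scalp parameterization, and all function names (scalp_uv, rasterize_strands, inpainting_mask) are hypothetical, and the diffusion model itself is left as a comment.

import numpy as np

RES = 64            # UV scalp-map resolution (assumed)
STRAND_CH = 3       # channels holding strand features (assumed)
STRIP_CH = 4        # channels holding strip parameters (assumed)

def scalp_uv(root_xyz):
    # Toy scalp parameterization: map a 3D strand root on a unit
    # hemisphere to UV via spherical coordinates (a stand-in for the
    # paper's actual scalp UV mapping).
    x, y, z = root_xyz
    u = (np.arctan2(y, x) / (2.0 * np.pi)) % 1.0
    v = float(np.clip(z, 0.0, 1.0))
    return u, v

def rasterize_strands(strands):
    # Write per-strand features into the strand channels of a shared
    # (STRAND_CH + STRIP_CH)-channel UV map; record which texels are observed.
    uv_map = np.zeros((STRAND_CH + STRIP_CH, RES, RES), dtype=np.float32)
    observed = np.zeros((RES, RES), dtype=bool)
    for pts in strands:                       # pts: (N, 3) polyline
        u, v = scalp_uv(pts[0])               # the root position picks the texel
        i, j = int(v * (RES - 1)), int(u * (RES - 1))
        direction = pts[-1] - pts[0]          # crude per-strand feature
        uv_map[:STRAND_CH, i, j] = direction / (np.linalg.norm(direction) + 1e-8)
        observed[i, j] = True
    return uv_map, observed

def inpainting_mask(observed):
    # Strand channels are known only at observed texels (spatial inpainting);
    # strip channels are unknown everywhere (channel-wise inpainting target).
    mask = np.zeros((STRAND_CH + STRIP_CH, RES, RES), dtype=np.float32)
    mask[:STRAND_CH] = observed[None].astype(np.float32)
    return mask

# Toy usage: two straight strands rooted on the hemisphere.
strands = [np.linspace([0.5, 0.0, 0.8], [0.7, 0.1, 0.2], 16),
           np.linspace([-0.3, 0.4, 0.7], [-0.5, 0.6, 0.1], 16)]
uv_map, observed = rasterize_strands(strands)
mask = inpainting_mask(observed)
# A diffusion model would now denoise uv_map, keeping mask == 1 texels fixed
# and generating the remaining strand and strip values.
print(uv_map.shape, mask.shape, int(observed.sum()), "observed texels")

Under this reading, texels where the mask is 1 stay fixed during denoising (spatial inpainting of the strand channels), while the strip channels are generated everywhere (channel-wise inpainting), matching the abstract's description at a conceptual level.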
dc.description.sectionheaders: 3D Reconstruction
dc.description.seriesinformation: Pacific Graphics Conference Papers, Posters, and Demos
dc.identifier.doi: 10.2312/pg.20251290
dc.identifier.isbn: 978-3-03868-295-0
dc.identifier.pages: 12 pages
dc.identifier.uri: https://doi.org/10.2312/pg.20251290
dc.identifier.uri: https://diglib.eg.org/handle/10.2312/pg20251290
dc.publisher: The Eurographics Association (en_US)
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Computing methodologies → Parametric curve and surface models; Reconstruction
dc.subject: Computing methodologies → Parametric curve and surface models
dc.subject: Reconstruction
dc.title: Generating 3D Hair Strips from Partial Strands using Diffusion Model (en_US)
Files
Original bundle
Name: pg20251290.pdf
Size: 9.19 MB
Format: Adobe Portable Document Format

Name: paper1242_mm1.pdf
Size: 888.24 KB
Format: Adobe Portable Document Format

Name: paper1242_mm2.mp4
Size: 96.59 MB
Format: Video MP4