Two-phase Hair Image Synthesis by Self-Enhancing Generative Model

dc.contributor.authorQiu, Haonanen_US
dc.contributor.authorWang, Chuanen_US
dc.contributor.authorZhu, Hangen_US
dc.contributor.authorZhu, Xiangyuen_US
dc.contributor.authorGu, Jinjinen_US
dc.contributor.authorHan, Xiaoguangen_US
dc.contributor.editorLee, Jehee and Theobalt, Christian and Wetzstein, Gordonen_US
dc.date.accessioned2019-10-14T05:08:27Z
dc.date.available2019-10-14T05:08:27Z
dc.date.issued2019
dc.description.abstractGenerating plausible hair images given limited guidance, such as sparse sketches or low-resolution images, has been made possible with the rise of Generative Adversarial Networks (GANs). Traditional image-to-image translation networks can generate recognizable results, but finer textures are usually lost and blur artifacts commonly appear. In this paper, we propose a two-phase generative model for high-quality hair image synthesis. The two-phase pipeline first generates a coarse image with an existing image translation model, then applies a re-generating network with self-enhancing capability to the coarse image. The self-enhancing capability is achieved by a proposed differentiable layer, which extracts the structural texture and orientation maps from a hair image. Extensive experiments on two tasks, Sketch2Hair and Hair Super-Resolution, demonstrate that our approach synthesizes plausible hair images with finer details and reaches the state of the art.en_US
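A differentiable orientation-map extraction of the kind the abstract describes can be sketched as a Gabor filter bank followed by a soft-argmax over per-orientation responses, which keeps gradients flowing back to the input image. The sketch below is a minimal illustration under that assumption; the function names, filter-bank parameters, and the choice of PyTorch are illustrative, not the authors' implementation.

    # Minimal sketch: differentiable orientation map via a Gabor filter bank.
    # All parameters (num_orientations, kernel_size, sigma, lambd, gamma, tau)
    # are assumed values for illustration only.
    import math
    import torch
    import torch.nn.functional as F

    def gabor_bank(num_orientations=8, kernel_size=17, sigma=3.0,
                   lambd=6.0, gamma=0.5):
        """Build a bank of Gabor kernels, one per orientation."""
        half = kernel_size // 2
        ys, xs = torch.meshgrid(
            torch.arange(-half, half + 1, dtype=torch.float32),
            torch.arange(-half, half + 1, dtype=torch.float32),
            indexing="ij")
        kernels = []
        for k in range(num_orientations):
            theta = math.pi * k / num_orientations
            x_t = xs * math.cos(theta) + ys * math.sin(theta)
            y_t = -xs * math.sin(theta) + ys * math.cos(theta)
            g = torch.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2)) \
                * torch.cos(2 * math.pi * x_t / lambd)
            kernels.append(g - g.mean())          # zero mean: respond to strand texture, not brightness
        return torch.stack(kernels).unsqueeze(1)  # shape (K, 1, kH, kW)

    def orientation_map(gray, bank, tau=0.1):
        """Soft orientation distribution per pixel; differentiable w.r.t. the input.

        gray: (B, 1, H, W) grayscale hair image in [0, 1].
        Returns (B, K, H, W), a softmax over K orientation responses.
        """
        resp = F.conv2d(gray, bank, padding=bank.shape[-1] // 2).abs()
        return F.softmax(resp / tau, dim=1)       # soft-argmax keeps gradients flowing

    if __name__ == "__main__":
        img = torch.rand(1, 1, 64, 64, requires_grad=True)
        omap = orientation_map(img, gabor_bank())
        omap.sum().backward()                     # gradients reach the input image
        print(omap.shape, img.grad is not None)

The temperature tau trades off sharpness of the per-pixel orientation distribution against gradient smoothness; a hard argmax over filter responses would give a crisper map but would not be differentiable.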
dc.description.number7
dc.description.sectionheadersGenerative Models
dc.description.seriesinformationComputer Graphics Forum
dc.description.volume38
dc.identifier.doi10.1111/cgf.13847
dc.identifier.issn1467-8659
dc.identifier.pages403-412
dc.identifier.urihttps://doi.org/10.1111/cgf.13847
dc.identifier.urihttps://diglib.eg.org:443/handle/10.1111/cgf13847
dc.publisherThe Eurographics Association and John Wiley & Sons Ltd.en_US
dc.titleTwo-phase Hair Image Synthesis by Self-Enhancing Generative Modelen_US