A Deep Learning Based Interactive Sketching System for Fashion Images Design

Authors: Li, Yao; Yu, Xianggang; Han, Xiaoguang; Jiang, Nianjuan; Jia, Kui; Lu, Jiangbo
Editors: Lee, Sung-hee; Zollmann, Stefanie; Okabe, Makoto; Wuensche, Burkhard
Date issued: 2020-10-29
ISBN: 978-3-03868-120-5
DOI: https://doi.org/10.2312/pg.20201224
URI: https://diglib.eg.org:443/handle/10.2312/pg20201224
Pages: 13-18

Abstract: In this work, we propose an interactive system for designing diverse, high-quality garment images from fashion sketches and texture information. The major challenge behind this system is generating high-quality, detailed texture that follows the user-provided texture information. Prior works mainly use a texture-patch representation and try to map a small texture patch to a whole garment image, and hence cannot generate high-quality details. In contrast, inspired by intrinsic image decomposition, we decompose this task into texture synthesis and shading enhancement. In particular, we propose a novel bi-colored edge texture representation for synthesizing textured garment images, and a shading enhancer that renders shading from grayscale edges. The bi-colored edge representation provides simple but effective texture cues and color constraints, so that details can be better reconstructed. Moreover, the rendered shading makes the synthesized garment image more vivid.

CCS Concepts: Networks → Network reliability; Computing methodologies → Computer vision
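As a rough illustration of the two-stage design described in the abstract, the sketch below shows one plausible way to wire a texture-synthesis network (conditioned on the sketch and a bi-colored edge map) to a shading enhancer (conditioned on grayscale edges), composing the two outputs multiplicatively in the spirit of intrinsic image decomposition. All module names, channel counts, and architectures here are illustrative assumptions, not the authors' implementation.

```python
"""Hypothetical sketch of the two-stage pipeline: texture synthesis
from bi-colored edges, shading from grayscale edges, multiplied
together (image = texture * shading). Shapes and layers are assumed."""
import torch
import torch.nn as nn


def conv_block(cin, cout):
    # Basic conv -> norm -> ReLU unit used by both stages.
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, padding=1),
        nn.InstanceNorm2d(cout),
        nn.ReLU(inplace=True),
    )


class TextureSynthesizer(nn.Module):
    """Maps a 1-channel sketch plus a 3-channel bi-colored edge map
    to a 3-channel textured (shading-free) garment image."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(1 + 3, 64),
            conv_block(64, 64),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Sigmoid(),  # texture values in [0, 1]
        )

    def forward(self, sketch, bicolor_edges):
        return self.net(torch.cat([sketch, bicolor_edges], dim=1))


class ShadingEnhancer(nn.Module):
    """Predicts a single-channel shading map from grayscale edges."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(1, 32),
            conv_block(32, 32),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, gray_edges):
        return self.net(gray_edges)


def compose(texture, shading):
    # Intrinsic-image-style composition: the 1-channel shading map
    # broadcasts over the 3 texture channels.
    return texture * shading


if __name__ == "__main__":
    sketch = torch.rand(1, 1, 256, 256)
    bicolor = torch.rand(1, 3, 256, 256)
    gray = torch.rand(1, 1, 256, 256)
    tex = TextureSynthesizer()(sketch, bicolor)
    shade = ShadingEnhancer()(gray)
    out = compose(tex, shade)
    print(out.shape)  # torch.Size([1, 3, 256, 256])
```

The multiplicative composition is the key structural choice suggested by the abstract: separating color/texture from shading lets each network solve a simpler sub-problem than mapping a texture patch directly to a fully shaded garment image.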