Title: Transferring Pose and Augmenting Background Variation for Deep Human Image Parsing
Authors: Kikuchi, Takazumi; Endo, Yuki; Kanamori, Yoshihiro; Hashimoto, Taisuke; Mitani, Jun
Editors: Jernej Barbic, Wen-Chieh Lin, Olga Sorkine-Hornung
Date issued: 2017-10-16
Date available: 2017-10-16
Year: 2017
ISBN: 978-3-03868-051-2
DOI: https://doi.org/10.2312/pg.20171317
URI: https://diglib.eg.org:443/handle/10.2312/pg20171317
Pages: 7-12
CCS concepts: Computing methodologies; Image segmentation; Image processing

Abstract: Human parsing is a fundamental task that estimates semantic parts in a human image, such as face, arm, leg, hat, and dress. Recent deep-learning based methods have achieved significant improvements, but collecting training datasets with pixel-wise annotations is labor-intensive. In this paper, we propose two solutions to cope with limited datasets. First, to handle various poses, we incorporate a pose estimation network into an end-to-end human parsing network in order to transfer common features across the domains. The pose estimation network can be trained using rich datasets and feeds valuable features to the human parsing network. Second, to handle complicated backgrounds, we automatically increase the variation of background images by replacing the original backgrounds of human images with those obtained from large-scale scenery image datasets. While each of the two solutions is versatile and beneficial to human parsing on its own, their combination yields further improvement.
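
The background-augmentation idea described in the abstract can be illustrated with a minimal compositing sketch: given a human image and its person mask, paste the masked person onto a randomly chosen scenery image. The function name, the mask feathering step, and the directory layout below are illustrative assumptions, not details taken from the paper.

```python
import random
from pathlib import Path

import cv2
import numpy as np


def replace_background(human_img, person_mask, scenery_dir, rng=random):
    """Composite the masked person onto a randomly chosen scenery image.

    human_img:   H x W x 3 uint8 image containing a person
    person_mask: H x W mask, nonzero wherever any human part is labeled
    scenery_dir: directory of background-only scenery images (assumed *.jpg)
    """
    h, w = human_img.shape[:2]

    # Pick a random scenery image and resize it to the human image size.
    bg_path = rng.choice(sorted(Path(scenery_dir).glob("*.jpg")))
    background = cv2.resize(cv2.imread(str(bg_path)), (w, h))

    # Binary person mask (union of all non-background labels).
    mask = (person_mask > 0).astype(np.float32)

    # Illustrative extra step: feather the mask edge to soften compositing seams.
    mask = cv2.GaussianBlur(mask, (5, 5), 0)[..., None]

    # Alpha-composite: keep person pixels, replace everything else.
    composite = human_img * mask + background * (1.0 - mask)
    return composite.astype(np.uint8)
```

Applying this to each annotated human image with many different scenery backgrounds multiplies the effective background variation of a small parsing dataset without requiring any additional pixel-wise annotation.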