Title: Depth-aware Neural Style Transfer
Authors: Liu, Xiao-Chang; Cheng, Ming-Ming; Lai, Yu-Kun; Rosin, Paul L.
Editors: Holger Winnemoeller and Lyn Bartram
Date: 2017-10-18
ISBN: 978-1-4503-5081-5
DOI: https://doi.org/10.1145/3092919.3092924
URI: https://diglib.eg.org:443/handle/10.2312/npar2017a04

Abstract:
Neural style transfer has recently received significant attention and demonstrated amazing results. An efficient solution proposed by Johnson et al. trains feed-forward convolutional neural networks by defining and optimizing perceptual loss functions. Such methods are typically based on high-level features extracted from pre-trained neural networks, where the loss functions contain two components: style loss and content loss. However, such pre-trained networks are originally designed for object recognition, and hence the high-level features often focus on the primary target and neglect other details. As a result, when input images contain multiple objects, potentially at different depths, the resulting images are often unsatisfactory because the image layout is destroyed and the boundary between the foreground and background, as well as between different objects, becomes obscured. We observe that the depth map effectively reflects the spatial distribution in an image, and preserving the depth map of the content image after stylization helps produce an image that preserves its semantic content. In this paper, we introduce a novel approach to neural style transfer that integrates depth preservation as an additional loss, preserving the overall image layout while performing style transfer.

CCS Concepts: Computing methodologies → Image manipulation; Computational photography; Non-photorealistic rendering
Keywords: deep learning, depth
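
A minimal sketch (not the authors' released code) of how a depth-preservation term could be combined with Johnson-style perceptual losses, as the abstract describes. Here vgg_features and depth_net are hypothetical stand-ins for a pre-trained feature extractor and a single-image depth estimator, and the loss weights are purely illustrative.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # feat: (B, C, H, W) -> (B, C, C) normalized Gram matrix used for style loss
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def total_loss(stylized, content, style_grams, vgg_features, depth_net,
               w_content=1.0, w_style=5.0, w_depth=10.0):
    feats_out = vgg_features(stylized)       # list of feature maps from several layers
    feats_content = vgg_features(content)

    # Content loss: match high-level features of the content image (e.g. a mid-level layer)
    content_loss = F.mse_loss(feats_out[2], feats_content[2])

    # Style loss: match Gram matrices of the style image at several layers
    style_loss = sum(F.mse_loss(gram_matrix(f), g)
                     for f, g in zip(feats_out, style_grams))

    # Depth loss: keep the stylized image's depth map close to the content image's,
    # preserving layout and the foreground/background separation
    depth_loss = F.mse_loss(depth_net(stylized), depth_net(content))

    return w_content * content_loss + w_style * style_loss + w_depth * depth_loss
```

In a feed-forward setup this combined loss would be minimized over the parameters of the image transformation network during training, so that stylization at test time remains a single forward pass.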