Depth-aware Neural Style Transfer
dc.contributor.author | Liu, Xiao-Chang | en_US |
dc.contributor.author | Cheng, Ming-Ming | en_US |
dc.contributor.author | Lai, Yu-Kun | en_US |
dc.contributor.author | Rosin, Paul L. | en_US |
dc.contributor.editor | Holger Winnemoeller and Lyn Bartram | en_US |
dc.date.accessioned | 2017-10-18T08:42:16Z | |
dc.date.available | 2017-10-18T08:42:16Z | |
dc.date.issued | 2017 | |
dc.description.abstract | Neural style transfer has recently received significant attention and demonstrated amazing results. An efficient solution proposed by Johnson et al. trains feed-forward convolutional neural networks by defining and optimizing perceptual loss functions. Such methods are typically based on high-level features extracted from pre-trained neural networks, where the loss functions contain two components: style loss and content loss. However, such pre-trained networks are originally designed for object recognition, and hence the high-level features often focus on the primary target and neglect other details. As a result, when input images contain multiple objects potentially at different depths, the resulting images are often unsatisfactory because the image layout is destroyed and the boundary between the foreground and background, as well as between different objects, becomes obscured. We observe that the depth map effectively reflects the spatial distribution in an image, and preserving the depth map of the content image after stylization helps produce an image that preserves its semantic content. In this paper, we introduce a novel approach for neural style transfer that integrates depth preservation as an additional loss, preserving the overall image layout while performing style transfer. | en_US |
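The abstract describes extending the usual perceptual loss (content loss plus style loss) with a depth-preservation term. A minimal sketch of such a combined loss is shown below; the weights `alpha`, `beta`, `gamma`, the function names, and the use of mean-squared error for the depth term are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gram_matrix(features):
    # features: (C, H*W) flattened feature map; the Gram matrix
    # captures feature correlations commonly used as style statistics
    c, n = features.shape
    return features @ features.T / (c * n)

def mse(a, b):
    # Mean-squared error between two arrays
    return float(np.mean((a - b) ** 2))

def total_loss(content_feat, style_feat, output_feat,
               content_depth, output_depth,
               alpha=1.0, beta=1e3, gamma=1e2):
    # Content loss: match high-level features of the content image
    l_content = mse(output_feat, content_feat)
    # Style loss: match Gram matrices of the style image
    l_style = mse(gram_matrix(output_feat), gram_matrix(style_feat))
    # Depth loss (the abstract's addition, hypothetical MSE form):
    # keep the stylized image's estimated depth close to the content image's
    l_depth = mse(output_depth, content_depth)
    # Weighted sum; alpha/beta/gamma are placeholder trade-off weights
    return alpha * l_content + beta * l_style + gamma * l_depth
```

In a real implementation the feature maps would come from a pre-trained recognition network and the depth maps from a depth-estimation network, with this loss minimized while training the feed-forward stylization network.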
dc.description.sectionheaders | Style Transfer | |
dc.description.seriesinformation | Non-Photorealistic Animation and Rendering | |
dc.identifier.doi | 10.1145/3092919.3092924 | |
dc.identifier.isbn | 978-1-4503-5081-5 | |
dc.identifier.issn | - | |
dc.identifier.uri | https://doi.org/10.1145/3092919.3092924 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/npar2017a04 | |
dc.publisher | Association for Computing Machinery, Inc (ACM) | en_US |
dc.subject | Computing methodologies | |
dc.subject | Image manipulation | |
dc.subject | Computational photography | |
dc.subject | Non-photorealistic rendering | |
dc.subject | deep learning | |
dc.subject | depth | |
dc.title | Depth-aware Neural Style Transfer | en_US |