Search Results

Now showing 1 - 2 of 2
  • Item
    Depth-aware Neural Style Transfer
    (Association for Computing Machinery, Inc (ACM), 2017) Liu, Xiao-Chang; Cheng, Ming-Ming; Lai, Yu-Kun; Rosin, Paul L.; Holger Winnemoeller and Lyn Bartram
    Neural style transfer has recently received significant attention and demonstrated amazing results. An efficient solution proposed by Johnson et al. trains feed-forward convolutional neural networks by defining and optimizing perceptual loss functions. Such methods are typically based on high-level features extracted from pre-trained neural networks, where the loss functions contain two components: style loss and content loss. However, such pre-trained networks are originally designed for object recognition, and hence the high-level features often focus on the primary target and neglect other details. As a result, when input images contain multiple objects potentially at different depths, the resulting images are often unsatisfactory because image layout is destroyed and the boundary between the foreground and background as well as different objects becomes obscured. We observe that the depth map effectively reflects the spatial distribution in an image and preserving the depth map of the content image after stylization helps produce an image that preserves its semantic content. In this paper, we introduce a novel approach for neural style transfer that integrates depth preservation as additional loss, preserving overall image layout while performing style transfer.
  • Item
    Benchmarking Non-Photorealistic Rendering of Portraits
    (Association for Computing Machinery, Inc (ACM), 2017) Rosin, Paul L.; Mould, David; Berger, Itamar; Collomosse, John; Lai, Yu-Kun; Li, Chuan; Li, Hua; Shamir, Ariel; Wand, Michael; Wang, Tinghuai; Winnemoeller, Holger; Holger Winnemoeller and Lyn Bartram
    We present a set of images for helping NPR practitioners evaluate their image-based portrait stylisation algorithms. Using a standard set both facilitates comparisons with other methods and helps ensure that presented results are representative. We give two levels of difficulty, each consisting of 20 images selected systematically so as to provide good coverage of several possible portrait characteristics. We applied three existing portrait-specific stylisation algorithms, two general-purpose stylisation algorithms, and one general learning-based stylisation algorithm to the first level of the benchmark, corresponding to the type of constrained images that have often been used in portrait-specific work. We found that the existing methods are generally effective on this new image set, demonstrating that level one of the benchmark is tractable; challenges remain at level two. Results revealed several advantages conferred by portrait-specific algorithms over general-purpose algorithms: portrait-specific algorithms can use domain-specific information to preserve key details such as eyes and to eliminate extraneous details, and they have more scope for semantically meaningful abstraction due to the underlying face model. Finally, we provide some thoughts on systematically extending the benchmark to higher levels of difficulty.