Title: Generative Adversarial Image Super-Resolution Through Deep Dense Skip Connections
Authors: Zhu, Xiaobin; Li, Zhuangzi; Zhang, Xiaoyu; Li, Haisheng; Xue, Ziyu; Wang, Lei
Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes
Issue date: 2018-10-07
Journal: Computer Graphics Forum (ISSN 1467-8659)
Pages: 289-300
DOI: https://doi.org/10.1111/cgf.13568
Handle: https://diglib.eg.org:443/handle/10.1111/cgf13568
CCS concepts: Computing methodologies -> Image processing; Computer systems organization -> Neural networks

Abstract:
Recently, image super-resolution methods based on Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) have shown promising performance. However, these methods tend to generate blurry, over-smoothed super-resolved (SR) images, owing to incomplete loss functions and insufficiently expressive network architectures. In this paper, a novel generative adversarial image super-resolution network with deep dense skip connections (GSR-DDNet) is proposed to address these problems. It exploits the GAN's ability to model data distributions, so that GSR-DDNet can select informative feature representations and learn the mapping between low-quality and high-quality images in an adversarial way. The pipeline of the proposed method consists of three main components: 1) a generator built as a deep dense skip-connection network, which learns a robust mapping function to produce SR images from low-resolution inputs; 2) a feature extraction network based on VGG-19, which captures high-frequency feature maps for the content loss; and 3) a discriminator trained with the Wasserstein distance, which judges the overall style of SR images against ground-truth images. Experiments conducted on four publicly available datasets demonstrate the superiority of the proposed method over state-of-the-art methods.
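
The loss design named in the abstract (a VGG-19 content term combined with a Wasserstein adversarial term) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation; the chosen VGG layer index and adversarial weight are assumptions, and the generator and discriminator architectures are left abstract.

# Illustrative sketch only: VGG-19 content loss plus a Wasserstein adversarial term.
import torch.nn as nn
from torchvision.models import vgg19

class VGGContentLoss(nn.Module):
    """Content loss computed on deep VGG-19 feature maps (layer cut-off is an assumption)."""
    def __init__(self, layer_index=35):
        super().__init__()
        features = vgg19(pretrained=True).features[:layer_index]
        for p in features.parameters():
            p.requires_grad = False          # the feature extractor is kept fixed
        self.features = features.eval()
        self.criterion = nn.MSELoss()

    def forward(self, sr, hr):
        return self.criterion(self.features(sr), self.features(hr))

def generator_loss(critic, content_loss, sr, hr, adv_weight=1e-3):
    """Generator objective: content term plus Wasserstein adversarial term.
    With a Wasserstein critic, the generator maximises the critic's score on
    generated images, i.e. minimises its negation. adv_weight is an assumption."""
    content = content_loss(sr, hr)
    adversarial = -critic(sr).mean()
    return content + adv_weight * adversarial

def critic_loss(critic, sr, hr):
    """Wasserstein critic objective: assign higher scores to real HR images than to
    generated SR images. A Lipschitz constraint (weight clipping or gradient
    penalty), required for WGAN training, is omitted here for brevity."""
    return critic(sr.detach()).mean() - critic(hr).mean()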