Structure-Preserving Super Resolution with Gradient Guidance

Structures matter in single image super resolution (SISR). Recent studies benefiting from generative adversarial networks (GANs) have advanced SISR by recovering photo-realistic images. However, undesired structural distortions often remain in the recovered images. In this paper, we propose a structure-preserving super resolution method that alleviates this issue while maintaining the merit of GAN-based methods: generating perceptually pleasant details. Specifically, we exploit gradient maps of images to guide the recovery in two aspects. On the one hand, we restore high-resolution gradient maps with a gradient branch to provide additional structure priors for the SR process. On the other hand, we propose a gradient loss which imposes a second-order restriction on the super-resolved images. Together with previous image-space loss functions, these gradient-space objectives help generative networks concentrate more on geometric structures. Moreover, our method is model-agnostic and can potentially be applied to off-the-shelf SR networks. Experimental results show that our method achieves the best PI and LPIPS performance while remaining comparable in PSNR and SSIM to state-of-the-art perceptual-driven SR methods. Visual results demonstrate our superiority in restoring structures while generating natural SR images.
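The gradient loss described above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: it assumes grayscale images as 2-D arrays, uses simple forward finite differences as the gradient operator (the paper's actual operator and network framework may differ), and measures the L1 distance between the gradient maps of the super-resolved and ground-truth images.

```python
import numpy as np

def gradient_map(img):
    """Approximate gradient magnitude map of a 2-D grayscale image.

    Uses forward finite differences; the exact gradient operator is an
    assumption for illustration, not taken from the paper.
    """
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal differences
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # vertical differences
    return np.sqrt(gx ** 2 + gy ** 2)

def gradient_loss(sr, hr):
    """L1 distance between gradient maps of the super-resolved image
    (sr) and the ground-truth high-resolution image (hr).

    Because it compares differences of pixel differences, this acts as
    a second-order restriction on the SR output, pushing the network to
    match edge sharpness and location rather than only pixel values.
    """
    return np.mean(np.abs(gradient_map(sr) - gradient_map(hr)))
```

In a GAN-based SR pipeline, such a term would be added to the usual image-space losses (pixel, perceptual, adversarial) so that structural errors around edges are penalized explicitly; two images that match pixel-wise have zero gradient loss, while a blurred edge against a sharp one does not.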

CVPR 2020
Benchmark results — Task: Image Super-Resolution (4x upscaling), Model: SPSR

Dataset    Metric            Value    Global Rank
BSD100     PSNR              25.505   # 50
BSD100     SSIM              0.6576   # 46
BSD100     Perceptual Index  2.351    # 1
BSD100     LPIPS             0.1611   # 3
Set14      PSNR              26.64    # 54
Set14      SSIM              0.7930   # 7
Set14      Perceptual Index  2.9036   # 1
Set14      LPIPS             0.1318   # 2
Set5       PSNR              30.40    # 56
Set5       SSIM              0.8627   # 49
Set5       Perceptual Index  3.2743   # 1
Set5       LPIPS             0.0644   # 2
Urban100   PSNR              24.799   # 36
Urban100   SSIM              0.9481   # 1
Urban100   Perceptual Index  3.5511   # 1
Urban100   LPIPS             0.1184   # 2

