Image Super-Resolution by Neural Texture Transfer

CVPR 2019 · Zhifei Zhang, Zhaowen Wang, Zhe Lin, Hairong Qi

Due to the significant information loss in low-resolution (LR) images, it has become extremely challenging to further advance the state-of-the-art of single image super-resolution (SISR). Reference-based super-resolution (RefSR), on the other hand, has proven to be promising in recovering high-resolution (HR) details when a reference (Ref) image with content similar to that of the LR input is given. However, the quality of RefSR can degrade severely when the Ref image is less similar. This paper aims to unleash the potential of RefSR by leveraging more texture details from Ref images with stronger robustness, even when irrelevant Ref images are provided. Inspired by recent work on image stylization, we formulate the RefSR problem as neural texture transfer. We design an end-to-end deep model which enriches HR details by adaptively transferring texture from Ref images according to their textural similarity. Instead of matching content in the raw pixel space as done by previous methods, our key contribution is a multi-level matching conducted in the neural space. This matching scheme facilitates multi-scale neural transfer that allows the model to benefit more from semantically related Ref patches, and to gracefully degrade to SISR performance on the least relevant Ref inputs. We build a benchmark dataset for general RefSR research, which contains Ref images paired with LR inputs at varying levels of similarity. Both quantitative and qualitative evaluations demonstrate the superiority of our method over state-of-the-art approaches.
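
The core mechanism the abstract describes, matching LR and Ref content in a neural feature space and transferring the most similar Ref texture patches, can be illustrated with a small sketch. The snippet below is not the authors' implementation: it uses random arrays in place of multi-level VGG features, the helper names (extract_patches, match_and_swap) are hypothetical, and it shows a single matching level with a naive patch paste, whereas the paper's model fuses the swapped features and similarity weights inside an end-to-end SR network.

# Minimal, illustrative sketch of feature-space patch matching and swapping,
# in the spirit of neural texture transfer. Helper names are hypothetical.
import numpy as np

def extract_patches(feat, patch=3, stride=1):
    """Unfold a CxHxW feature map into (N, C*patch*patch) patch vectors."""
    c, h, w = feat.shape
    out = []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            out.append(feat[:, i:i + patch, j:j + patch].ravel())
    return np.stack(out), (h - patch + 1, w - patch + 1)

def match_and_swap(lr_feat, ref_feat, patch=3):
    """For every LR feature patch, find the most similar Ref patch by
    normalized inner product and copy it into a 'swapped' feature map.
    Returns the swapped map and a per-position similarity map that can
    serve as a transfer weight."""
    lr_patches, (gh, gw) = extract_patches(lr_feat, patch)
    ref_patches, _ = extract_patches(ref_feat, patch)

    # Cosine similarity between all LR and Ref patches.
    lr_n = lr_patches / (np.linalg.norm(lr_patches, axis=1, keepdims=True) + 1e-8)
    ref_n = ref_patches / (np.linalg.norm(ref_patches, axis=1, keepdims=True) + 1e-8)
    sim = lr_n @ ref_n.T                      # (N_lr, N_ref)
    best = sim.argmax(axis=1)                 # index of the best-matching Ref patch
    weight = sim.max(axis=1).reshape(gh, gw)  # similarity map

    # Paste the best-matching Ref patches back (overlaps simply overwritten).
    c = lr_feat.shape[0]
    swapped = np.zeros_like(lr_feat)
    for idx, ref_idx in enumerate(best):
        i, j = divmod(idx, gw)
        swapped[:, i:i + patch, j:j + patch] = ref_patches[ref_idx].reshape(c, patch, patch)
    return swapped, weight

# Toy usage with random feature maps standing in for one VGG level.
lr_feat = np.random.rand(8, 16, 16).astype(np.float32)
ref_feat = np.random.rand(8, 16, 16).astype(np.float32)
swapped, weight = match_and_swap(lr_feat, ref_feat)
print(swapped.shape, weight.min(), weight.max())

In the paper, the similarity map plays the role of the adaptive weighting mentioned in the abstract: where the Ref patch is a poor match, the transferred texture contributes little and the model falls back toward plain SISR behavior.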

Datasets

CUFED5, Sun80, Urban100

Results

Task | Dataset | Model | Metric | Value | Global Rank
Image Super-Resolution | CUFED5 - 4x upscaling | SRNTT-l2 | PSNR | 26.24 | #1
Image Super-Resolution | Sun80 - 4x upscaling | SRNTT-l2 | PSNR | 28.54 | #1
Image Super-Resolution | Urban100 - 4x upscaling | SRNTT-l2 | PSNR | 25.5 | #37
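
The PSNR numbers above follow the standard peak signal-to-noise ratio definition for 4x upscaling. The short sketch below shows that formula on toy data; the helper name compute_psnr is ours, and the exact evaluation protocol (color space, border cropping, test splits) follows the respective benchmarks rather than this snippet.

# Generic PSNR computation between a super-resolved image and its HR ground truth.
import numpy as np

def compute_psnr(sr, hr, max_val=255.0):
    """PSNR in dB for two images of the same shape (uint8 assumed)."""
    sr = sr.astype(np.float64)
    hr = hr.astype(np.float64)
    mse = np.mean((sr - hr) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy example: a lightly perturbed copy of the ground truth gives a high but finite PSNR.
hr = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
sr = np.clip(hr + np.random.randint(-5, 6, hr.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {compute_psnr(sr, hr):.2f} dB")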

Methods


No methods listed for this paper.