Recursive Generalization Transformer for Image Super-Resolution

11 Mar 2023  ·  Zheng Chen, Yulun Zhang, Jinjin Gu, Linghe Kong, Xiaokang Yang

Transformer architectures have exhibited remarkable performance in image super-resolution (SR). Because the self-attention (SA) in Transformers has quadratic computational complexity, existing methods tend to restrict SA to local regions to reduce overhead. However, this local design limits the exploitation of global context, which is crucial for accurate image reconstruction. In this work, we propose the Recursive Generalization Transformer (RGT) for image SR, which captures global spatial information and is suitable for high-resolution images. Specifically, we propose recursive-generalization self-attention (RG-SA). It recursively aggregates input features into representative feature maps and then utilizes cross-attention to extract global information. Meanwhile, the channel dimensions of the attention matrices (query, key, and value) are scaled down to mitigate redundancy in the channel domain. Furthermore, we combine RG-SA with local self-attention to enhance the exploitation of global context, and propose hybrid adaptive integration (HAI) for module integration. HAI allows direct and effective fusion between features at different levels (local or global). Extensive experiments demonstrate that our RGT outperforms recent state-of-the-art methods both quantitatively and qualitatively. Code and pre-trained models are available at https://github.com/zhengchen1999/RGT.
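
To make the two core ideas concrete, below is a minimal PyTorch sketch of RG-SA (recursive aggregation of the feature map into a small representative map, followed by cross-attention with a reduced channel dimension) and HAI (learnable fusion of a block's output with its input). This is not the authors' implementation (see the repository linked above); the module names, the recursion depth `depth`, the channel-scaling ratio `c_ratio`, the strided depthwise convolutions used for aggregation, and the per-channel fusion weight in HAI are all illustrative assumptions.

```python
# Illustrative sketch only; not the official RGT code.
import torch
import torch.nn as nn


class RecursiveGeneralizationSA(nn.Module):
    """Sketch of RG-SA: recursively aggregate the input into a small
    'representative' map, then cross-attend from the full-resolution tokens
    (queries) to the aggregated tokens (keys/values) with scaled channels."""

    def __init__(self, dim, num_heads=4, depth=2, c_ratio=0.5):
        super().__init__()
        self.num_heads = num_heads
        c_attn = max(num_heads, int(dim * c_ratio))   # scaled channel dimension
        self.c_attn = (c_attn // num_heads) * num_heads
        # Recursive aggregation: each step halves the spatial size with a
        # strided depthwise convolution (an assumption in this sketch).
        self.aggregate = nn.ModuleList(
            [nn.Conv2d(dim, dim, 3, stride=2, padding=1, groups=dim) for _ in range(depth)]
        )
        self.q = nn.Linear(dim, self.c_attn, bias=False)
        self.k = nn.Linear(dim, self.c_attn, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                              # x: (B, C, H, W)
        B, C, H, W = x.shape
        # 1) Recursive generalization: shrink x into a representative map.
        rep = x
        for conv in self.aggregate:
            rep = conv(rep)
        # 2) Cross-attention: queries from all H*W tokens, keys/values from
        #    the much smaller set of aggregated tokens.
        q_tokens = x.flatten(2).transpose(1, 2)        # (B, H*W, C)
        kv_tokens = rep.flatten(2).transpose(1, 2)     # (B, h*w, C)
        q, k, v = self.q(q_tokens), self.k(kv_tokens), self.v(kv_tokens)

        def split(t, d):                               # (B, N, d) -> (B, heads, N, d/heads)
            return t.view(B, -1, self.num_heads, d // self.num_heads).transpose(1, 2)

        q, k, v = split(q, self.c_attn), split(k, self.c_attn), split(v, C)
        attn = (q @ k.transpose(-2, -1)) * (q.shape[-1] ** -0.5)
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, H * W, C)
        return self.proj(out).transpose(1, 2).view(B, C, H, W)


class HybridAdaptiveIntegration(nn.Module):
    """Sketch of HAI: fuse a block's output with its input through a
    learnable element-wise weight (the exact parameterization is assumed)."""

    def __init__(self, dim):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(1, dim, 1, 1))  # learnable fusion weight

    def forward(self, block_out, block_in):
        return block_out + self.alpha * block_in


if __name__ == "__main__":
    x = torch.randn(1, 64, 48, 48)                     # a small feature map
    rg_sa = RecursiveGeneralizationSA(dim=64, num_heads=4)
    hai = HybridAdaptiveIntegration(dim=64)
    y = hai(rg_sa(x), x)
    print(y.shape)                                     # torch.Size([1, 64, 48, 48])
```

Because keys and values come from the small representative map, the attention cost grows with the product of the full and aggregated token counts rather than quadratically in H*W, which is what makes global attention affordable for high-resolution inputs in this sketch.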

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Super-Resolution | Manga109 - 4x upscaling | RGT+ | PSNR | 32.68 | #6 |
| Image Super-Resolution | Manga109 - 4x upscaling | RGT+ | SSIM | 0.9303 | #5 |
| Image Super-Resolution | Manga109 - 4x upscaling | RGT | PSNR | 32.50 | #9 |
| Image Super-Resolution | Manga109 - 4x upscaling | RGT | SSIM | 0.9291 | #7 |
| Image Super-Resolution | Set14 - 4x upscaling | RGT+ | PSNR | 29.28 | #7 |
| Image Super-Resolution | Set14 - 4x upscaling | RGT+ | SSIM | 0.7979 | #10 |
| Image Super-Resolution | Set14 - 4x upscaling | RGT | PSNR | 29.23 | #9 |
| Image Super-Resolution | Set14 - 4x upscaling | RGT | SSIM | 0.7972 | #13 |
