Channel Attention and Multi-level Features Fusion for Single Image Super-Resolution

16 Oct 2018  ·  Yue Lu, Yun Zhou, Zhuqing Jiang, Xiaoqiang Guo, Zixuan Yang

Convolutional neural networks (CNNs) have demonstrated superior performance in super-resolution (SR). However, most CNN-based SR methods either neglect the differing importance of feature channels or fail to take full advantage of hierarchical features. To address these issues, this paper presents a novel recursive unit. First, at the beginning of each unit, a compact channel attention mechanism adaptively recalibrates the channel-wise importance of the input features. Then, multi-level features, rather than only deep-level features, are extracted and fused. Additionally, we find that applying the learnable upsampling method (i.e., transposed convolution) only to the residual branch, while using bicubic interpolation on the identity branch, forces the model to learn more details than applying it to both branches. Experiments show that our method achieves results competitive with state-of-the-art methods while maintaining faster speed.
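
The abstract does not spell out the exact form of the compact channel attention, so the following PyTorch sketch only illustrates the general idea with a squeeze-and-excitation style module; the class name, the reduction ratio of 16, and the 1x1-convolution bottleneck are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative sketch).

    Global average pooling squeezes each channel to one scalar; a small
    bottleneck of 1x1 convolutions then produces per-channel weights in
    (0, 1) that rescale the input feature maps.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # B x C x H x W -> B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(self.pool(x))  # B x C x 1 x 1
        return x * w               # recalibrate channel importance
```

Placing such a module at the start of each recursive unit lets the unit reweight incoming channels before multi-level features are extracted and fused.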

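Similarly, the dual-branch upsampling design can be sketched as below. This is a hypothetical DualBranchUpsampler with assumed hyperparameters (a x2 scale and kernel sizes chosen so the transposed convolution upscales by exactly that factor); the paper's actual layer configuration may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualBranchUpsampler(nn.Module):
    """Learnable upsampling on the residual branch only (illustrative sketch).

    The residual branch is upscaled with a transposed convolution
    (learnable), the identity branch with fixed bicubic interpolation,
    and the two are summed to form the super-resolved output.
    """

    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        self.scale = scale
        # Kernel/stride/padding chosen so the output is exactly
        # scale * input size; this choice assumes an even scale factor.
        self.deconv = nn.ConvTranspose2d(
            channels, channels, kernel_size=2 * scale,
            stride=scale, padding=scale // 2)
        self.to_rgb = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, features: torch.Tensor,
                lr_image: torch.Tensor) -> torch.Tensor:
        residual = self.to_rgb(self.deconv(features))  # learnable branch
        identity = F.interpolate(
            lr_image, scale_factor=self.scale,
            mode="bicubic", align_corners=False)       # fixed branch
        return identity + residual
```

Because the bicubic branch already carries the low-frequency content, the transposed convolution is free to spend its capacity on the missing high-frequency residual, which is the intuition behind the design choice described in the abstract.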