
MAMNet: Multi-path Adaptive Modulation Network for Image Super-Resolution

In recent years, single image super-resolution (SR) methods based on deep convolutional neural networks (CNNs) have made significant progress. However, due to the non-adaptive nature of the convolution operation, they cannot adapt to various characteristics of images, which limits their representational capability and, consequently, results in unnecessarily large model sizes. To address this issue, we propose a novel multi-path adaptive modulation network (MAMNet). Specifically, we propose a multi-path adaptive modulation block (MAMB), which is a lightweight yet effective residual block that adaptively modulates residual feature responses by fully exploiting their information via three paths. The three paths model three types of information suitable for SR: 1) channel-specific information (CSI) using global variance pooling, 2) inter-channel dependencies (ICD) based on the CSI, and 3) channel-specific spatial dependencies (CSD) via depth-wise convolution. We demonstrate that the proposed MAMB is more effective and parameter-efficient for image SR than other feature modulation methods. In addition, experimental results show that our MAMNet outperforms most of the state-of-the-art methods with a relatively small number of parameters.
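To make the three-path modulation concrete, below is a minimal, hypothetical PyTorch sketch of a residual block with CSI, ICD, and CSD paths as described in the abstract. The layer sizes, activations, the SE-style bottleneck used for the ICD path, and the way the paths are fused (summation followed by a sigmoid gate) are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn


class MAMBSketch(nn.Module):
    """Hypothetical three-path adaptive modulation residual block (sketch)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Residual branch: two 3x3 convolutions (a common residual-block layout; assumed).
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # ICD path: inter-channel dependencies computed from the pooled CSI
        # (bottleneck 1x1 convolutions, SE-style; assumed).
        self.icd = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        # CSD path: channel-specific spatial dependencies via depth-wise convolution.
        self.csd = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        res = self.body(x)
        # CSI: global variance pooling -> one statistic per channel (shape B x C x 1 x 1).
        csi = res.var(dim=(2, 3), keepdim=True, unbiased=False)
        # ICD: channel-wise descriptors derived from the CSI.
        icd = self.icd(csi)
        # CSD: per-channel spatial map from the depth-wise convolution.
        csd = self.csd(res)
        # Fuse the paths into a modulation map and gate the residual (fusion rule assumed).
        modulation = torch.sigmoid(csi + icd + csd)
        return x + res * modulation
```

A usage example: `MAMBSketch(64)(torch.randn(1, 64, 48, 48))` returns a tensor of the same shape, so such blocks could be stacked inside a standard SR backbone; the actual MAMNet architecture and its fusion details are specified in the paper itself.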
