Single Stage Adaptive Multi-Attention Network for Image Restoration

Recently, attention-based networks have been successful for image restoration tasks. However, existing methods are either computationally expensive or have limited receptive fields, which constrains the model. They are also less resilient in spatial and contextual aspects and lack pixel-to-pixel correspondence, which may degrade feature representations. In this paper, we propose a novel and computationally efficient architecture, the Single Stage Adaptive Multi-Attention Network (SSAMAN), for image restoration tasks, particularly image denoising and image deblurring. SSAMAN efficiently addresses computational challenges and expands receptive fields, enhancing robustness in spatial and contextual feature representation. Its Adaptive Multi-Attention Module (AMAM), which consists of an Adaptive Pixel Attention Branch (APAB) and an Adaptive Channel Attention Branch (ACAB), uniquely integrates channel-wise and pixel-wise dimensions, significantly improving sensitivity to edges, shapes, and textures. We perform extensive experiments and ablation studies to validate the performance of SSAMAN. Our model achieves state-of-the-art results on various benchmarks: on image denoising, SSAMAN reaches a notable 40.08 dB PSNR on the SIDD dataset, outperforming Restormer by 0.06 dB with 41.02% less computational cost, and achieves 40.05 dB PSNR on the DND dataset. For image deblurring, SSAMAN achieves 33.53 dB PSNR on the GoPro dataset. Code and models are available on GitHub.

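To make the channel/pixel split described above concrete, the following is a minimal PyTorch sketch of an adaptive multi-attention block, assuming a squeeze-and-excitation-style channel branch, a 1x1-convolution pixel branch, and concatenation-plus-residual fusion. The class names, layer sizes, and fusion rule are illustrative assumptions, not the paper's exact APAB/ACAB design.

```python
# Hedged sketch of an adaptive multi-attention block; all design choices
# (reduction ratio, 1x1 convs, concat + residual fusion) are assumptions.
import torch
import torch.nn as nn

class AdaptiveChannelAttentionBranch(nn.Module):
    """Hypothetical ACAB: re-weights channels from globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))  # per-channel gating

class AdaptivePixelAttentionBranch(nn.Module):
    """Hypothetical APAB: predicts a per-pixel attention map with 1x1 convs."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.attn(x)  # per-pixel gating

class AdaptiveMultiAttentionModule(nn.Module):
    """Fuses the two branches; the fusion and residual connection are assumed."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel_branch = AdaptiveChannelAttentionBranch(channels)
        self.pixel_branch = AdaptivePixelAttentionBranch(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = self.fuse(
            torch.cat([self.channel_branch(x), self.pixel_branch(x)], dim=1)
        )
        return x + fused  # residual output keeps the input feature map's shape

if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 128)                  # N, C, H, W feature map
    print(AdaptiveMultiAttentionModule(64)(x).shape)  # torch.Size([1, 64, 128, 128])
```

In this sketch the channel branch modulates "what" feature maps to emphasize while the pixel branch modulates "where", which is one common way to combine the two attention dimensions the abstract refers to.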



Results from the Paper


Task             Dataset  Model   Metric Name   Metric Value  Global Rank
Image Denoising  DND      SSAMAN  PSNR (sRGB)   40.05         #1
Image Denoising  DND      SSAMAN  SSIM (sRGB)   0.963         #1
Image Denoising  SIDD     SSAMAN  PSNR (sRGB)   40.08         #4
Image Denoising  SIDD     SSAMAN  SSIM (sRGB)   0.962         #4
