MAMIQA: No-Reference Image Quality Assessment Based on Multiscale Attention Mechanism With Natural Scene Statistics

No-Reference Image Quality Assessment (NR-IQA) aims to evaluate the perceptual quality of an image in accordance with human perception. Many recent studies use Transformers, whose self-attention weights different image regions differently, to simulate the perception of the human visual system (HVS). However, the quadratic computational complexity of self-attention makes these models slow and expensive. Moreover, resizing images during feature extraction discards full-resolution quality information. To address these issues, we propose a lightweight attention mechanism that uses decomposed large-kernel convolutions to extract multiscale features, together with a novel feature enhancement module that simulates the HVS. We further compensate for the information loss caused by image resizing with supplementary features from natural scene statistics. Experimental results on five standard datasets show that the proposed method surpasses state-of-the-art methods while significantly reducing computational cost.
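The parameter savings from decomposing a large-kernel convolution can be illustrated with a quick count. The sketch below assumes a VAN-style large-kernel attention split — a (2d−1)×(2d−1) depthwise convolution, a ⌈K/d⌉×⌈K/d⌉ depthwise dilated convolution, and a 1×1 pointwise convolution; the specific values K=21, d=3, and 64 channels are illustrative assumptions, not figures taken from this paper.

```python
import math

def depthwise_large_kernel_params(k: int, channels: int) -> int:
    """Parameters of a single K x K depthwise conv over `channels` channels."""
    return k * k * channels

def decomposed_lka_params(k: int, d: int, channels: int) -> int:
    """Parameters of a VAN-style decomposition of a K x K depthwise conv:
    (2d-1)x(2d-1) depthwise + ceil(K/d) x ceil(K/d) dilated depthwise + 1x1 pointwise.
    Kernel sizes here are illustrative assumptions, not the paper's exact configuration."""
    local = (2 * d - 1) ** 2 * channels          # small depthwise conv
    dilated = math.ceil(k / d) ** 2 * channels   # depthwise dilated conv
    pointwise = channels * channels              # 1x1 channel mixing
    return local + dilated + pointwise

# Example: a 21x21 kernel with dilation 3 over 64 channels
print(depthwise_large_kernel_params(21, 64))  # 28224
print(decomposed_lka_params(21, 3, 64))       # 8832
```

Beyond the parameter reduction, the convolutional form scales linearly with the number of pixels, avoiding the quadratic cost of self-attention over spatial positions.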
