Image Quality Assessment using Contrastive Learning

25 Oct 2021  ·  Pavan C. Madhusudana, Neil Birkbeck, Yilin Wang, Balu Adsumilli, Alan C. Bovik

We consider the problem of obtaining image quality representations in a self-supervised manner. We use prediction of distortion type and degree as an auxiliary task to learn features from an unlabeled image dataset containing a mixture of synthetic and realistic distortions. We then train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem. We refer to the proposed training framework and resulting deep IQA model as the CONTRastive Image QUality Evaluator (CONTRIQUE). During evaluation, the CNN weights are frozen and a linear regressor maps the learned representations to quality scores in a No-Reference (NR) setting. We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models, even without any additional fine-tuning of the CNN backbone. The learned representations are highly robust and generalize well across images afflicted by either synthetic or authentic distortions. Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets. The implementations used in this paper are available at \url{}.
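The evaluation stage described above can be illustrated with a minimal sketch: a frozen encoder produces fixed representations, and only a linear (ridge) regressor is fit to map them to subjective quality scores, with rank correlation (SRCC) as the evaluation metric. This is an assumption-laden toy illustration, not the authors' implementation — the real features come from the contrastively trained CNN backbone, whereas here the encoder is stubbed with a random projection and the MOS labels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen CONTRIQUE backbone: a fixed random projection
# with a nonlinearity. In the actual model these would be CNN features.
proj = rng.standard_normal((64, 128))

def frozen_encoder(images):
    return np.tanh(images @ proj)

def fit_linear_regressor(features, mos, l2=1e-3):
    # Closed-form ridge regression: w = (X^T X + l2*I)^-1 X^T y,
    # with a bias column appended to the features.
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    A = X.T @ X + l2 * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ mos)

def predict_quality(features, w):
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return X @ w

# Toy data: 200 "images" with synthetic MOS labels that are (noisily)
# linear in the frozen features, mimicking the NR evaluation protocol.
images = rng.standard_normal((200, 64))
feats = frozen_encoder(images)
true_w = rng.standard_normal(128)
mos = feats @ true_w + 0.01 * rng.standard_normal(200)

w = fit_linear_regressor(feats, mos)
pred = predict_quality(feats, w)

# Spearman rank correlation (SRCC): Pearson correlation of the ranks.
ranks_true = np.argsort(np.argsort(mos))
ranks_pred = np.argsort(np.argsort(pred))
srcc = np.corrcoef(ranks_true, ranks_pred)[0, 1]
```

Because the backbone is frozen, only the ridge weights are learned per dataset, which is why the paper can report competitive results without fine-tuning the CNN.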

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| No-Reference Image Quality Assessment | CSIQ | CONTRIQUE | SRCC | 0.942 | #4 |
| No-Reference Image Quality Assessment | CSIQ | CONTRIQUE | PLCC | 0.955 | #4 |
| No-Reference Image Quality Assessment | KADID-10k | CONTRIQUE | SRCC | 0.934 | #1 |
| No-Reference Image Quality Assessment | KADID-10k | CONTRIQUE | PLCC | 0.937 | #1 |
| No-Reference Image Quality Assessment | TID2013 | CONTRIQUE | SRCC | 0.843 | #3 |
| No-Reference Image Quality Assessment | TID2013 | CONTRIQUE | PLCC | 0.857 | #6 |

