KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment

14 Oct 2019  ·  Vlad Hosu, Hanhe Lin, Tamas Sziranyi, Dietmar Saupe ·

Deep learning methods for image quality assessment (IQA) have been limited by the small size of existing datasets, since large datasets require substantial resources both for generating publishable content and for annotating it accurately. We present a systematic and scalable approach to creating KonIQ-10k, the largest IQA dataset to date, consisting of 10,073 quality-scored images. It is the first in-the-wild database aiming for ecological validity with respect to the authenticity of distortions, the diversity of content, and quality-related indicators. Through crowdsourcing, we obtained 1.2 million reliable quality ratings from 1,459 crowd workers, paving the way for more general IQA models. We propose a novel deep learning model (KonCept512) that generalizes well beyond the test set (0.921 SROCC) to the current state-of-the-art database, LIVE-in-the-Wild (0.825 SROCC). The model derives its core performance from the InceptionResNet architecture and from being trained at a higher resolution (512x384) than previous models. Correlation analysis shows that KonCept512 performs similarly to having nine subjective scores for each test image.
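The abstract reports model quality via the Spearman rank-order correlation coefficient (SROCC) between predicted scores and subjective mean opinion scores. As a minimal sketch of how that metric is computed, the snippet below implements the standard Spearman formula 1 - 6*Σd²/(n(n²-1)) in plain Python, assuming no tied scores; the score values shown are illustrative, not data from KonIQ-10k.

```python
def ranks(xs):
    """Return 1-based ranks of xs (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def srocc(a, b):
    """Spearman rank-order correlation for tie-free score lists."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical predicted quality scores vs. subjective MOS values.
predicted = [3.1, 2.4, 4.0, 1.5, 3.8]
mos = [3.0, 2.5, 4.2, 1.4, 4.3]
print(srocc(predicted, mos))  # → 0.9
```

A high SROCC (such as the reported 0.921) means the model orders images by quality almost exactly as human raters do, even if the absolute score values differ.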



Introduced in the Paper:

KonIQ-10k

Used in the Paper:

ImageNet, YFCC100M, MSU NR VQA Database
Benchmark results
Task: Video Quality Assessment · Dataset: MSU NR VQA Database · Model: KonCept512

Metric  Value   Global Rank
SRCC    0.8360  # 16
PLCC    0.8464  # 17
KLCC    0.6608  # 16
Type    NR      # 1