No-reference image quality assessment (NR-IQA) aims to measure image quality without access to a reference image.
Our results show that networks trained to regress to ground-truth targets on labeled data while simultaneously learning to rank unlabeled data obtain significantly better, state-of-the-art results for both IQA and crowd counting.
This paper uses robust statistics and curvelet transform to learn a general-purpose no-reference (NR) image quality assessment (IQA) model.
While assessing image quality, the filters need to capture perceptual differences based on dissimilarities between a reference image and its distorted version.
In this work, we compare state-of-the-art quality-based and content-based spatial pooling strategies and show that although features are key to any image quality assessment method, pooling also matters.
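To illustrate why pooling matters, the sketch below contrasts two common strategies for collapsing a local quality map into a single score: uniform average pooling and worst-percentile pooling. The map, function names, and the 10% threshold are illustrative assumptions, not the specific strategies compared in the work above.

```python
import numpy as np

# Hypothetical local quality map (higher = better), e.g. as produced
# by a full-reference metric such as SSIM; values are synthetic here.
rng = np.random.default_rng(0)
quality_map = rng.uniform(0.5, 1.0, size=(32, 32))

def mean_pool(qmap):
    """Average pooling: every spatial location is weighted equally."""
    return float(qmap.mean())

def percentile_pool(qmap, p=10):
    """Worst-percentile pooling: average only the lowest p% of local
    scores, assuming severe local distortions dominate perceived quality."""
    k = max(1, int(qmap.size * p / 100))
    worst = np.sort(qmap.ravel())[:k]
    return float(worst.mean())

print("mean pooling:      ", mean_pool(quality_map))
print("percentile pooling:", percentile_pool(quality_map))
```

Because percentile pooling averages only the worst local scores, it always yields a score no higher than uniform averaging on the same map, making the pooled metric more sensitive to localized distortions.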
An average observer perceives the world in color rather than in black and white.
Moreover, BleSS significantly improves the performance of SR-SIM and FSIM on the full TID2013 database.