We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models.
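A minimal sketch of what such sample-level analyses can look like, assuming a classifier has already assigned a predicted class to each generated sample (the function names and exact metrics here are illustrative, not the paper's):

```python
import numpy as np

def discriminability(pred_labels, cond_labels):
    """Hypothetical proxy: fraction of generated samples whose conditioning
    class a classifier recovers from the image alone."""
    pred_labels = np.asarray(pred_labels)
    cond_labels = np.asarray(cond_labels)
    return float(np.mean(pred_labels == cond_labels))

def diversity(pred_labels, num_classes):
    """Hypothetical proxy: normalized entropy of the predicted-class
    histogram. 1.0 means classes are covered uniformly; 0.0 indicates
    mode collapse onto a single class."""
    counts = np.bincount(np.asarray(pred_labels), minlength=num_classes)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(num_classes))
```

Separating the two quantities matters because a model can score well on one while failing the other, e.g. emitting highly recognizable samples of only a few classes.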
Ranked #10 on Conditional Image Generation on CIFAR-10
Automatically learned image quality assessment has recently become a hot topic because of its usefulness in a wide variety of applications, such as evaluating image capture pipelines, storage techniques, and media sharing.
Ranked #4 on Aesthetics Quality Assessment on AVA
Our results show that networks trained to regress to the ground truth targets for labeled data and to simultaneously learn to rank unlabeled data obtain significantly better, state-of-the-art results for both IQA and crowd counting.
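A joint objective of this kind can be sketched as a regression loss on labeled images plus a pairwise margin (ranking) loss on unlabeled pairs whose relative quality is known, e.g. an image versus a more heavily distorted copy of itself. The function below is an illustrative sketch, not the paper's exact formulation; `margin` and `alpha` are assumed hyperparameters:

```python
import numpy as np

def combined_loss(pred_labeled, targets, pred_higher, pred_lower,
                  margin=1.0, alpha=0.5):
    """Hypothetical joint objective: MSE regression on labeled quality
    scores, plus a hinge-style ranking loss that pushes the predicted
    score of the known-higher-quality image above the lower one by at
    least `margin`. `alpha` weights the ranking term."""
    mse = np.mean((np.asarray(pred_labeled) - np.asarray(targets)) ** 2)
    rank = np.mean(np.maximum(
        0.0, margin - (np.asarray(pred_higher) - np.asarray(pred_lower))))
    return float(mse + alpha * rank)
```

The appeal of the ranking term is that relative labels are nearly free to generate (by synthetically distorting images), so the unlabeled branch can scale far beyond the labeled set.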
Furthermore, on the LIVE benchmark we show that our approach is superior to existing no-reference IQA (NR-IQA) techniques, and that we even outperform state-of-the-art full-reference IQA (FR-IQA) methods without having to resort to high-quality reference images.
While several reviews of GANs have been presented to date, none has assessed the status of the field in terms of its progress toward addressing practical challenges relevant to computer vision.
Face image quality is an important factor to enable high performance face recognition systems.
Ranked #1 on Face Quality Assessment on LFW
We present a deep neural network-based approach to image quality assessment (IQA).
The performance of objective image quality assessment (IQA) models has been evaluated primarily by comparing model predictions to human quality judgments.
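Such evaluations typically report a rank correlation between model scores and human mean opinion scores (MOS). A minimal Spearman correlation sketch, assuming no tied scores (`scipy.stats.spearmanr` handles ties properly):

```python
import numpy as np

def spearman_rho(model_scores, mos):
    """Spearman rank correlation between objective model predictions and
    human mean opinion scores. Uses the closed-form 1 - 6*sum(d^2)/(n*(n^2-1)),
    which assumes no ties in either ranking."""
    ranks_model = np.argsort(np.argsort(np.asarray(model_scores)))
    ranks_mos = np.argsort(np.argsort(np.asarray(mos)))
    n = len(ranks_model)
    d = ranks_model - ranks_mos
    return float(1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1)))
```

A rho near 1 means the model orders images by quality the same way humans do, even if the absolute score scales differ.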
Objective measures of image quality generally operate by comparing pixels of a "degraded" image to those of the original.
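The classic example of this pixel-wise full-reference approach is peak signal-to-noise ratio (PSNR), shown here as a simple self-contained sketch:

```python
import numpy as np

def psnr(reference, degraded, max_val=255.0):
    """Peak signal-to-noise ratio: compares a degraded image to its
    pristine original pixel by pixel via the mean squared error.
    Higher is better; identical images give infinity."""
    ref = np.asarray(reference, dtype=np.float64)
    deg = np.asarray(degraded, dtype=np.float64)
    mse = np.mean((ref - deg) ** 2)
    if mse == 0:
        return float("inf")
    return float(10 * np.log10(max_val ** 2 / mse))
```

Metrics like this require access to the original image, which is exactly the limitation that motivates the no-reference (NR-IQA) methods discussed above.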