Video Quality Assessment
102 papers with code • 10 benchmarks • 12 datasets
Video Quality Assessment is a computer vision task that aims to mimic human subjective perception of video quality. The goal is to predict a mean opinion score (MOS) for a video, where a higher score indicates better perceptual quality. Well-known benchmarks for this task include KoNViD-1k, LIVE-VQC, YouTube-UGC, and LSVQ. Performance is usually evaluated with SROCC (Spearman rank-order correlation coefficient), PLCC (Pearson linear correlation coefficient), and RMSE (root mean squared error) against the ground-truth scores.
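As a concrete illustration, here is a minimal sketch of how these three criteria are typically computed with NumPy and SciPy. The predicted and ground-truth MOS values are made up, and note that in practice PLCC is often reported after a nonlinear logistic mapping of the predictions, which this sketch omits.

```python
# Minimal sketch: evaluating predicted MOS against ground-truth MOS
# with the three standard VQA criteria (SROCC, PLCC, RMSE).
import numpy as np
from scipy import stats

def evaluate_vqa(pred_mos: np.ndarray, true_mos: np.ndarray) -> dict:
    """Return the standard correlation/error metrics for a VQA model."""
    srocc, _ = stats.spearmanr(pred_mos, true_mos)  # rank correlation (monotonicity)
    plcc, _ = stats.pearsonr(pred_mos, true_mos)    # linear correlation (accuracy)
    rmse = float(np.sqrt(np.mean((pred_mos - true_mos) ** 2)))
    return {"SROCC": srocc, "PLCC": plcc, "RMSE": rmse}

# Toy usage with made-up scores on a 1-5 MOS scale:
pred = np.array([3.1, 4.2, 2.0, 4.8, 3.5])
true = np.array([3.0, 4.5, 1.8, 4.9, 3.2])
print(evaluate_vqa(pred, true))
```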
Most implemented papers
Towards Deep Learning Models Resistant to Adversarial Attacks
The principled nature of this min-max formulation enables the identification of methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.
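For reference, a minimal PyTorch sketch of the projected gradient descent (PGD) attack that the paper's min-max training is built on; the eps, alpha, and step values are common illustrative defaults, not prescriptions from the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD with a random start, for inputs in [0, 1]."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                     # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project to eps-ball
            x_adv = x_adv.clamp(0, 1)                               # stay a valid image
    return x_adv.detach()
```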
The Unreasonable Effectiveness of Deep Features as a Perceptual Metric
We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics.
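The resulting metric (LPIPS) is available as the authors' `lpips` package (`pip install lpips`). A minimal usage sketch; the random tensors stand in for real reference and distorted images, which should be RGB tensors scaled to [-1, 1]:

```python
import torch
import lpips

loss_fn = lpips.LPIPS(net='alex')        # AlexNet features, as in the paper
img0 = torch.rand(1, 3, 64, 64) * 2 - 1  # reference (random stand-in)
img1 = torch.rand(1, 3, 64, 64) * 2 - 1  # distorted (random stand-in)
dist = loss_fn(img0, img1)               # lower distance = more perceptually similar
print(dist.item())
```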
Visualizing and Understanding Convolutional Networks
Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark.
NIMA: Neural Image Assessment
Automatically learned quality assessment for images has recently become a hot topic due to its usefulness in a wide variety of applications such as evaluating image capture pipelines, storage techniques and sharing media.
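NIMA predicts a distribution over the ten possible quality ratings rather than a single score. A toy sketch of how such a distribution is reduced to a mean score and a spread (the predicted probabilities here are random stand-ins, not model output):

```python
import torch

probs = torch.softmax(torch.randn(1, 10), dim=1)  # p(score = 1..10), stand-in prediction
scores = torch.arange(1, 11, dtype=torch.float32)
mean_score = (probs * scores).sum(dim=1)          # expected score = final quality rating
std_score = ((probs * (scores - mean_score.unsqueeze(1)) ** 2)
             .sum(dim=1)).sqrt()                  # spread indicates rating ambiguity
print(mean_score.item(), std_score.item())
```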
The 2018 PIRM Challenge on Perceptual Image Super-resolution
This paper reports on the 2018 PIRM challenge on perceptual super-resolution (SR), held in conjunction with the Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018.
UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content
Recent years have witnessed an explosion of user-generated content (UGC) videos shared and streamed over the Internet, thanks to the evolution of affordable and reliable consumer capture devices, and the tremendous popularity of social media platforms.
FAST-VQA: Efficient End-to-end Video Quality Assessment with Fragment Sampling
Built from fragment sampling and the Fragment Attention Network (FANet), the proposed FrAgment Sample Transformer for VQA (FAST-VQA) enables efficient end-to-end deep VQA and learns effective video-quality-related representations.
Neighbourhood Representative Sampling for Efficient End-to-end Video Quality Assessment
Existing practices such as resizing and cropping change the quality of the original videos through the loss of detail and content, and are therefore harmful to quality assessment.
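A rough NumPy sketch of the fragment-sampling idea shared by FAST-VQA and this follow-up: draw raw-resolution mini-patches from a uniform grid over the frame and splice them into one small input, instead of resizing or cropping the whole frame. The grid and patch sizes below are illustrative, not the papers' exact configuration, and the sketch assumes each grid cell is at least `patch` pixels on a side.

```python
import numpy as np

def sample_fragment(frame: np.ndarray, grid=7, patch=32, rng=None):
    """frame: (H, W, C) array. Returns a (grid*patch, grid*patch, C) fragment."""
    rng = rng or np.random.default_rng()
    h, w, c = frame.shape
    cell_h, cell_w = h // grid, w // grid
    out = np.empty((grid * patch, grid * patch, c), dtype=frame.dtype)
    for i in range(grid):
        for j in range(grid):
            # Random raw-resolution patch inside grid cell (i, j):
            # local texture is kept at native scale, so quality cues survive.
            y = i * cell_h + rng.integers(0, cell_h - patch + 1)
            x = j * cell_w + rng.integers(0, cell_w - patch + 1)
            out[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = \
                frame[y:y+patch, x:x+patch]
    return out

fragment = sample_fragment(np.zeros((1080, 1920, 3), dtype=np.uint8))
print(fragment.shape)  # (224, 224, 3)
```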
Test-Time Training with Self-Supervision for Generalization under Distribution Shifts
In this paper, we propose Test-Time Training, a general approach for improving the performance of predictive models when training and test data come from different distributions.
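A minimal PyTorch sketch of the idea: before predicting on a test batch, adapt a copy of the shared feature extractor with a few gradient steps on a self-supervised task (the paper uses rotation prediction). The `encoder`, `main_head`, and `ssl_head` modules are placeholders for illustration, not the paper's released code.

```python
import copy
import torch
import torch.nn.functional as F

def rotate_batch(x):
    """Return the 4 rotations of an NCHW batch and their labels (0-3)."""
    rots = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4).repeat_interleave(x.size(0))
    return torch.cat(rots), labels

def predict_with_ttt(encoder, main_head, ssl_head, x, steps=1, lr=1e-3):
    enc = copy.deepcopy(encoder)              # adapt a per-batch copy of the encoder
    opt = torch.optim.SGD(enc.parameters(), lr=lr)
    for _ in range(steps):
        xr, yr = rotate_batch(x)
        loss = F.cross_entropy(ssl_head(enc(xr)), yr)  # self-supervised rotation loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return main_head(enc(x))              # main prediction with adapted features
```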
Exploring Video Quality Assessment on User Generated Contents from Aesthetic and Technical Perspectives
We propose the Disentangled Objective Video Quality Evaluator (DOVER) to learn the quality of UGC videos from these two perspectives.