Video Quality Assessment

102 papers with code • 10 benchmarks • 12 datasets

Video Quality Assessment is a computer vision task that aims to mimic human subjective perception of video quality. The goal is to predict a mean opinion score (MOS), where a higher score indicates better perceptual quality. Well-known benchmarks for this task include KoNViD-1k, LIVE-VQC, YouTube-UGC, and LSVQ. Performance is usually evaluated with the Spearman rank-order correlation coefficient (SROCC), the Pearson linear correlation coefficient (PLCC), and root mean square error (RMSE) between predicted scores and ground-truth MOS.
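As a sketch of this standard evaluation protocol, the snippet below computes SROCC, PLCC, and RMSE between predicted scores and ground-truth MOS with SciPy/NumPy; the array values are purely illustrative.

```python
import numpy as np
from scipy import stats

# Ground-truth mean opinion scores and model predictions (illustrative values).
mos = np.array([3.8, 2.1, 4.5, 1.7, 3.2])
pred = np.array([3.5, 2.4, 4.2, 2.0, 3.0])

# SROCC: rank correlation, insensitive to monotonic rescaling of predictions.
srocc, _ = stats.spearmanr(pred, mos)

# PLCC: linear correlation; in practice predictions are often passed through a
# fitted logistic mapping to MOS first, omitted here for brevity.
plcc, _ = stats.pearsonr(pred, mos)

# RMSE: absolute error in MOS units.
rmse = np.sqrt(np.mean((pred - mos) ** 2))

print(f"SROCC={srocc:.3f}  PLCC={plcc:.3f}  RMSE={rmse:.3f}")
```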

Most implemented papers

Towards Deep Learning Models Resistant to Adversarial Attacks

MadryLab/mnist_challenge ICLR 2018

Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.
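The method behind this repository is projected gradient descent (PGD) adversarial training. Below is a minimal, hedged PyTorch sketch of the L-infinity PGD attack; `model`, `eps`, `alpha`, and `steps` are placeholder choices, and adversarial training then simply minimizes the loss on the perturbed examples.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: ascend the loss, projecting back into the eps-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Signed gradient step, then project onto the eps-ball and valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```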

The Unreasonable Effectiveness of Deep Features as a Perceptual Metric

richzhang/PerceptualSimilarity CVPR 2018

We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics.
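The learned metric from this work is distributed as the `lpips` package; a minimal usage sketch, assuming the package is installed and images are RGB tensors scaled to [-1, 1], looks roughly like this:

```python
import torch
import lpips

# LPIPS distance backed by AlexNet features; 'vgg' and 'squeeze' backbones also exist.
loss_fn = lpips.LPIPS(net='alex')

# Two RGB images as (N, 3, H, W) tensors in [-1, 1] (random here for illustration).
img0 = torch.rand(1, 3, 64, 64) * 2 - 1
img1 = torch.rand(1, 3, 64, 64) * 2 - 1

d = loss_fn(img0, img1)  # lower distance = more perceptually similar
print(d.item())
```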

Visualizing and Understanding Convolutional Networks

pytorch/captum 12 Nov 2013

Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark.

NIMA: Neural Image Assessment

idealo/image-quality-assessment 15 Sep 2017

Automatically learned quality assessment for images has recently become a hot topic due to its usefulness in a wide variety of applications such as evaluating image capture pipelines, storage techniques and sharing media.
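NIMA predicts a distribution over the ten human rating buckets rather than a single score, and the quality estimate is the expectation of that distribution. A hedged sketch of that final step, with a hypothetical softmax output:

```python
import numpy as np

# Hypothetical predicted distribution over rating buckets 1..10 for one image.
p = np.array([0.02, 0.03, 0.05, 0.10, 0.20, 0.25, 0.18, 0.10, 0.05, 0.02])
scores = np.arange(1, 11)

mean_score = np.sum(p * scores)                               # NIMA-style quality estimate
std_score = np.sqrt(np.sum(p * (scores - mean_score) ** 2))   # spread of the ratings
print(f"quality ~ {mean_score:.2f} +/- {std_score:.2f}")
```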

The 2018 PIRM Challenge on Perceptual Image Super-resolution

alterzero/DBPN-Pytorch 20 Sep 2018

This paper reports on the 2018 PIRM challenge on perceptual super-resolution (SR), held in conjunction with the Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018.
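Within RMSE-defined regions, PIRM 2018 submissions were ranked by a no-reference perceptual index that combines the Ma et al. score and NIQE; to the best of my recollection the ranking quantity is PI = ((10 - Ma) + NIQE) / 2, lower being better, sketched below with placeholder values.

```python
def perceptual_index(ma_score, niqe_score):
    """PIRM-style no-reference perceptual index: lower means better perceptual quality."""
    return 0.5 * ((10.0 - ma_score) + niqe_score)

# Placeholder scores for illustration.
print(perceptual_index(ma_score=8.3, niqe_score=3.1))  # -> 2.4
```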

UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content

tu184044109/VIDEVAL_release 29 May 2020

Recent years have witnessed an explosion of user-generated content (UGC) videos shared and streamed over the Internet, thanks to the evolution of affordable and reliable consumer capture devices, and the tremendous popularity of social media platforms.

FAST-VQA: Efficient End-to-end Video Quality Assessment with Fragment Sampling

timothyhtimothy/fast-vqa 6 Jul 2022

Consisting of fragments and FANet, the proposed FrAgment Sample Transformer for VQA (FAST-VQA) enables efficient end-to-end deep VQA and learns effective video-quality-related representations.
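The core idea of fragment sampling, as described in the paper, is to splice small patches taken at native resolution from a uniform grid over each frame into a single "fragment", preserving local quality-sensitive detail while keeping the network input small; in the paper the same grid offsets are kept across frames of a clip so temporal distortions survive as well. A rough NumPy sketch for a single frame, with illustrative (not the paper's exact) grid and patch sizes:

```python
import numpy as np

def sample_fragment(frame, grid=7, patch=32, rng=np.random.default_rng()):
    """Splice one native-resolution patch per grid cell into a (grid*patch)^2 fragment."""
    h, w, c = frame.shape
    cell_h, cell_w = h // grid, w // grid
    out = np.zeros((grid * patch, grid * patch, c), dtype=frame.dtype)
    for i in range(grid):
        for j in range(grid):
            # Random top-left corner of a patch inside this grid cell, clamped to the frame.
            y = min(i * cell_h + rng.integers(0, max(cell_h - patch, 1)), h - patch)
            x = min(j * cell_w + rng.integers(0, max(cell_w - patch, 1)), w - patch)
            out[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = frame[y:y+patch, x:x+patch]
    return out

fragment = sample_fragment(np.zeros((1080, 1920, 3), dtype=np.uint8))
print(fragment.shape)  # (224, 224, 3)
```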

Neighbourhood Representative Sampling for Efficient End-to-end Video Quality Assessment

QualityAssessment/FAST-VQA-and-FasterVQA 11 Oct 2022

On the other hand, existing practices, such as resizing and cropping, will change the quality of original videos due to the loss of details and contents, and are therefore harmful to quality assessment.

Test-Time Training with Self-Supervision for Generalization under Distribution Shifts

yueatsprograms/ttt_cifar_release 29 Sep 2019

In this paper, we propose Test-Time Training, a general approach for improving the performance of predictive models when training and test data come from different distributions.
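A hedged sketch of the idea: the model is trained jointly on the main task and a self-supervised task (rotation prediction in the paper), and at test time the shared feature extractor is updated on the self-supervised loss for the incoming sample before the final prediction is made. The names below (`encoder`, `main_head`, `ssl_head`) are placeholders, not the repository's API.

```python
import copy
import torch
import torch.nn.functional as F

def rotate_batch(x):
    """Build the 4-way rotation task: rotated copies of x plus their rotation labels."""
    xs = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4).repeat_interleave(x.size(0))
    return torch.cat(xs, dim=0), labels

def test_time_train_and_predict(encoder, main_head, ssl_head, x, steps=1, lr=1e-3):
    # Adapt copies so every test sample starts from the original trained weights.
    enc, ssl = copy.deepcopy(encoder), copy.deepcopy(ssl_head)
    opt = torch.optim.SGD(list(enc.parameters()) + list(ssl.parameters()), lr=lr)
    for _ in range(steps):
        xr, yr = rotate_batch(x)
        loss = F.cross_entropy(ssl(enc(xr)), yr)   # self-supervised loss on the test sample
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return main_head(enc(x))                   # prediction with the adapted encoder
```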

Exploring Video Quality Assessment on User Generated Contents from Aesthetic and Technical Perspectives

vqassessment/dover ICCV 2023

In light of this, we propose the Disentangled Objective Video Quality Evaluator (DOVER) to learn the quality of UGC videos based on the two perspectives.
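DOVER scores a video from two branches, an aesthetic view and a technical view, and fuses the two branch scores into an overall prediction. The sketch below shows only that fusion step with placeholder equal weights; the released model's actual fusion weights differ.

```python
def fuse_dover_style(aesthetic_score, technical_score, w_aesthetic=0.5, w_technical=0.5):
    """Weighted fusion of per-branch quality scores (weights here are placeholders)."""
    return w_aesthetic * aesthetic_score + w_technical * technical_score

overall = fuse_dover_style(aesthetic_score=0.72, technical_score=0.64)
print(overall)  # 0.68
```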