Video Quality Assessment
68 papers with code • 9 benchmarks • 11 datasets
Video Quality Assessment is a computer vision task that aims to mimic human subjective perception of video. The goal is to predict a MOS (mean opinion score), where a higher score indicates better perceptual quality. Well-known benchmarks for this task include KoNViD-1k, LIVE-VQC, YouTube-UGC, and LSVQ. SROCC, PLCC, and RMSE are commonly used to evaluate the performance of different models.
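The three standard metrics can be computed with NumPy and SciPy; a minimal sketch (the function name and the toy scores below are illustrative, not from any specific benchmark):

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

def vqa_metrics(mos, pred):
    """Compute the three standard VQA evaluation metrics.

    mos  -- ground-truth mean opinion scores
    pred -- model-predicted quality scores
    """
    mos = np.asarray(mos, dtype=float)
    pred = np.asarray(pred, dtype=float)
    srocc = spearmanr(mos, pred).correlation    # rank (monotonic) agreement
    plcc = pearsonr(mos, pred)[0]               # linear agreement
    rmse = np.sqrt(np.mean((mos - pred) ** 2))  # absolute error in MOS units
    return srocc, plcc, rmse

# Toy example with made-up scores: identical rankings give SROCC = 1.0
mos = [4.2, 3.1, 2.5, 4.8, 1.9]
pred = [4.0, 3.3, 2.2, 4.6, 2.1]
srocc, plcc, rmse = vqa_metrics(mos, pred)
```

Note that SROCC only rewards getting the ranking right, while PLCC and RMSE also penalize miscalibrated score magnitudes; this is why papers usually report all three.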
Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.
We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics.
Recent years have witnessed an explosion of user-generated content (UGC) videos shared and streamed over the Internet, thanks to the evolution of affordable and reliable consumer capture devices, and the tremendous popularity of social media platforms.
Consisting of fragments and FANet, the proposed FrAgment Sample Transformer for VQA (FAST-VQA) enables efficient end-to-end deep VQA and learns effective video-quality-related representations.
On the other hand, existing practices, such as resizing and cropping, change the quality of the original videos through the loss of details and content, and are therefore harmful to quality assessment.
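The fragment idea can be sketched as grid-based patch sampling at native resolution: instead of resizing the whole frame, one small patch is cropped from each cell of a spatial grid and the patches are stitched together, so local pixel-level quality cues survive. This is only an illustrative sketch, assuming a random patch position per cell; the grid and patch sizes are not FAST-VQA's actual settings:

```python
import numpy as np

def sample_fragments(frame, grid=4, patch=8):
    """Crop one native-resolution patch from each cell of a grid x grid
    layout and stitch them into a small 'fragment' image. Unlike resizing,
    every sampled patch keeps its original pixels.
    """
    h, w = frame.shape[:2]
    ch, cw = h // grid, w // grid  # cell height and width
    rows = []
    for i in range(grid):
        cols = []
        for j in range(grid):
            # random patch location inside cell (i, j)
            y = i * ch + np.random.randint(0, ch - patch + 1)
            x = j * cw + np.random.randint(0, cw - patch + 1)
            cols.append(frame[y:y + patch, x:x + patch])
        rows.append(np.concatenate(cols, axis=1))
    return np.concatenate(rows, axis=0)

frame = np.random.rand(240, 320, 3)  # stand-in for a video frame
frag = sample_fragments(frame)       # 4x4 grid of 8x8 patches -> 32x32x3
```

The stitched fragment is far smaller than the frame, which is what makes efficient end-to-end training feasible, while each patch still reflects the original video's local distortions.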
In this paper, we propose Test-Time Training, a general approach for improving the performance of predictive models when training and test data come from different distributions.
Exploring Video Quality Assessment on User Generated Contents from Aesthetic and Technical Perspectives
In light of this, we propose the Disentangled Objective Video Quality Evaluator (DOVER) to learn the quality of UGC videos based on the two perspectives.