Unified Quality Assessment of In-the-Wild Videos with Mixed Datasets Training

9 Nov 2020 · Dingquan Li, Tingting Jiang, Ming Jiang

Video quality assessment (VQA) is an important problem in computer vision, and the videos encountered in computer vision applications are usually captured in the wild. We focus on automatically assessing the quality of these in-the-wild videos, a challenging problem due to the absence of reference videos, the complexity of distortions, and the diversity of video content. Moreover, video content and distortions differ substantially across existing datasets, which leads to poor performance of data-driven methods in the cross-dataset evaluation setting. To improve the performance of quality assessment models, we borrow intuitions from human perception, specifically the content dependency and temporal-memory effects of the human visual system. To address the cross-dataset evaluation challenge, we explore a mixed datasets training strategy for training a single VQA model on multiple datasets. The proposed unified framework explicitly comprises three stages: a relative quality assessor, a nonlinear mapping, and a dataset-specific perceptual scale alignment, which jointly predict relative quality, perceptual quality, and subjective quality. Experiments are conducted on four publicly available datasets for VQA in the wild, i.e., LIVE-VQC, LIVE-Qualcomm, KoNViD-1k, and CVD2014. The experimental results verify the effectiveness of the mixed datasets training strategy and demonstrate the superior performance of the unified model in comparison with state-of-the-art models. For reproducible research, we make the PyTorch implementation of our method available at https://github.com/lidq92/MDTVSFA.
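
The three-stage design can be read as a small prediction head on top of any video feature extractor. Below is a minimal PyTorch sketch of that pipeline under our own assumptions: the class name `UnifiedVQAHead`, the plain linear regressor for stage 1, and the two-parameter sigmoid standing in for the paper's nonlinear mapping are all ours, not the authors' implementation, which lives in the linked repository.

```python
import torch
import torch.nn as nn

class UnifiedVQAHead(nn.Module):
    """Sketch of the three-stage prediction pipeline described in the abstract.

    Stage 1 (relative quality): a regressor maps per-video features to an
    unbounded relative quality score.
    Stage 2 (perceptual quality): a monotonic nonlinear mapping (here a
    two-parameter sigmoid, an assumption) squashes relative quality onto a
    common perceptual scale shared across datasets.
    Stage 3 (subjective quality): a dataset-specific linear alignment
    rescales perceptual quality to each dataset's subjective score range.
    """
    def __init__(self, feat_dim: int, num_datasets: int):
        super().__init__()
        self.regressor = nn.Linear(feat_dim, 1)            # stage 1
        self.alpha = nn.Parameter(torch.tensor(1.0))       # stage 2: slope
        self.beta = nn.Parameter(torch.tensor(0.0))        # stage 2: offset
        self.scale = nn.Parameter(torch.ones(num_datasets))   # stage 3: per-dataset scale
        self.shift = nn.Parameter(torch.zeros(num_datasets))  # stage 3: per-dataset shift

    def forward(self, feats: torch.Tensor, dataset_idx: torch.Tensor):
        relative = self.regressor(feats).squeeze(-1)                  # stage 1
        perceptual = torch.sigmoid(self.alpha * relative + self.beta) # stage 2
        subjective = (self.scale[dataset_idx] * perceptual
                      + self.shift[dataset_idx])                      # stage 3
        return relative, perceptual, subjective

# Usage on dummy features from two of four mixed datasets (shapes are illustrative):
head = UnifiedVQAHead(feat_dim=128, num_datasets=4)
feats = torch.randn(8, 128)
dataset_idx = torch.randint(0, 4, (8,))
relative, perceptual, subjective = head(feats, dataset_idx)
```

The point of the split is that relative and perceptual quality are comparable across datasets, while only the final stage absorbs each dataset's idiosyncratic subjective scale, which is what makes mixed datasets training feasible.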

Task                      Dataset              Model    Metric  Value    Global Rank
Video Quality Assessment  MSU NR VQA Database  MDTVSFA  SRCC    0.9289   #1
Video Quality Assessment  MSU NR VQA Database  MDTVSFA  PLCC    0.9431   #1
Video Quality Assessment  MSU NR VQA Database  MDTVSFA  KLCC    0.7883   #1
Video Quality Assessment  MSU NR VQA Database  MDTVSFA  Type    NR       #1
Video Quality Assessment  MSU SR-QA Dataset    MDTVSFA  SROCC   0.60193  #19
Video Quality Assessment  MSU SR-QA Dataset    MDTVSFA  PLCC    0.61821  #13
Video Quality Assessment  MSU SR-QA Dataset    MDTVSFA  KLCC    0.48406  #21
Video Quality Assessment  MSU SR-QA Dataset    MDTVSFA  Type    NR       #1
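
The metrics in the table are standard correlation measures between predicted and subjective quality scores (SRCC and SROCC both denote Spearman's rank correlation). A minimal SciPy sketch of how they are typically computed; the function name `correlation_metrics` is ours:

```python
from scipy import stats

def correlation_metrics(predicted, subjective):
    """Agreement between predicted and ground-truth subjective quality scores."""
    srcc, _ = stats.spearmanr(predicted, subjective)   # rank (monotonic) agreement
    plcc, _ = stats.pearsonr(predicted, subjective)    # linear agreement
    klcc, _ = stats.kendalltau(predicted, subjective)  # pairwise-ordering agreement
    return srcc, plcc, klcc
```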
