VMAF And Variants: Towards A Unified VQA

13 Mar 2021 · Pankaj Topiwala, Wei Dai, Jiangfeng Pian, Katalina Biondi, Arvind Krovvidi

Video quality assessment (VQA) is a fast-growing field: it is maturing in the full-reference (FR) case, yet remains challenging in the rapidly expanding no-reference (NR) case. We investigate variants of the popular VMAF video quality assessment algorithm for the FR case, using both support vector regression and feedforward neural networks. We extend it to the NR case, using somewhat different features but similar learning, to develop a partially unified framework for VQA. When fully trained, FR algorithms such as VMAF perform very well on test datasets, exceeding 90% in both the Pearson correlation coefficient (PCC) and the Spearman rank correlation coefficient (SRCC); but to estimate performance in the wild, we train and test from scratch on each database. With an 80/20 train/test split, we still achieve about 90% performance on average in both PCC and SRCC, with gains of up to 7-9% over VMAF, using an improved motion feature and better regression. Moreover, we obtain decent performance (about 75%) even if we ignore the reference and treat FR as NR, partly justifying our attempt at unification. In the true NR case, we reduce complexity relative to the leading recent algorithms VIDEVAL and RAPIQUE, yet achieve performance within 3-5% of theirs. Finally, we develop a method to analyze feature saliency and conclude that, for both VIDEVAL and RAPIQUE, a small subset of the features provides the bulk of the performance. In short, we find encouraging improvements in trainability in the FR case, reduced training complexity relative to leading NR methods, and a clearer view of feature saliency for feature selection.
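Since this page carries only the abstract, the pipeline below is a minimal sketch of the kind of learned fusion it describes: a support vector regression over per-clip quality features, evaluated with PCC and SRCC on an 80/20 split, followed by a crude drop-one-feature saliency probe. The synthetic features, labels, and hyperparameters are illustrative assumptions, not the authors' code or their actual feature set or saliency method.

```python
# Minimal sketch of a VMAF-style learned-fusion pipeline. The feature
# matrix, MOS labels, and SVR hyperparameters below are placeholders.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Stand-in for per-clip features (e.g., VIF scales, DLM, a motion feature)
# and mean opinion scores (MOS); a real run would load these from a database.
n_clips, n_feats = 200, 6
X = rng.normal(size=(n_clips, n_feats))
y = X @ rng.normal(size=n_feats) + 0.3 * rng.normal(size=n_clips)

# 80/20 train/test split, matching the evaluation protocol in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_tr)
model = SVR(kernel="rbf", C=10.0, epsilon=0.1)  # hyperparameters are guesses
model.fit(scaler.transform(X_tr), y_tr)

pred = model.predict(scaler.transform(X_te))
pcc, _ = pearsonr(y_te, pred)
srcc, _ = spearmanr(y_te, pred)
print(f"PCC={pcc:.3f}  SRCC={srcc:.3f}")

# Crude feature-saliency probe: drop one feature at a time, retrain, and
# measure the SRCC change. This is a generic ablation for illustration,
# not necessarily the saliency method the paper develops.
for j in range(n_feats):
    keep = [k for k in range(n_feats) if k != j]
    m = SVR(kernel="rbf", C=10.0, epsilon=0.1)
    m.fit(scaler.transform(X_tr)[:, keep], y_tr)
    s, _ = spearmanr(y_te, m.predict(scaler.transform(X_te)[:, keep]))
    print(f"feature {j}: SRCC without it = {s:.3f} (delta {s - srcc:+.3f})")
```

To mirror the paper's second regressor, a feedforward network (for instance, scikit-learn's MLPRegressor) could be swapped in for the SVR with the rest of the pipeline unchanged.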
