Perceptual quality assessment of user-generated content (UGC) videos is challenging because training requires large-scale human-annotated videos.
No-reference (NR) image quality assessment (IQA) is an important tool in enhancing the user experience in diverse visual applications.
In this work, we introduce two novel quality-relevant auxiliary tasks at the batch and sample levels to enable test-time adaptation (TTA) for blind IQA.
Designing learning-based no-reference (NR) video quality assessment (VQA) algorithms for camera-captured videos is cumbersome because it requires a large number of human quality annotations.
Completely blind video quality assessment (VQA) refers to a class of quality assessment methods that use no reference videos, human opinion scores, or training videos from the target database to learn a quality model.