We aim to advance blind image quality assessment (BIQA), which predicts human perception of image quality without any reference information.
No-reference image quality assessment (NR-IQA) aims to quantify how humans perceive visual distortions of digital images without access to their undistorted references.
The inaccessibility of pristine-quality reference videos and the complexity of authentic distortions pose significant challenges for blind video quality assessment (BVQA).
Ranked #4 on Video Quality Assessment on MSU NR VQA Database
In this paper, we present a simple yet effective continual learning method for BIQA with improved quality prediction accuracy, a better plasticity-stability trade-off, and robustness to task order and length.
In this paper, we formulate continual learning for BIQA, where a model learns continually from a stream of IQA datasets, building on what was learned from previously seen data.
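The continual-learning formulation above can be sketched as follows. This is a hypothetical illustration, not the paper's actual method: it keeps a shared feature map, fits one lightweight head per dataset in the stream (leaving earlier heads frozen), and averages head predictions at inference. All names and sizes are assumptions.

```python
import numpy as np

class ContinualIQA:
    """Illustrative continual learner over a stream of IQA datasets."""

    def __init__(self, dim):
        rng = np.random.default_rng(0)
        # Stand-in for a shared (fixed) feature backbone.
        self.shared = rng.standard_normal((dim, dim)) * 0.1
        self.heads = []  # one prediction head per dataset seen so far

    def learn_task(self, X, y):
        """Fit a new head on dataset (X, y) by least squares,
        without touching the heads learned on earlier datasets."""
        F = X @ self.shared
        w, *_ = np.linalg.lstsq(F, y, rcond=None)
        self.heads.append(w)

    def predict(self, X):
        """Average the quality estimates from all task heads."""
        F = X @ self.shared
        return np.mean([F @ w for w in self.heads], axis=0)
```

Freezing old heads is one crude way to trade plasticity (new heads adapt to new data) against stability (old heads are never overwritten); the actual paper's mechanism may differ.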
We then propose to recursively alternate the learning schemes of imitation and exploration to narrow the discrepancy between training and inference.
Nevertheless, due to the distributional shift between images simulated in the laboratory and captured in the wild, models trained on databases with synthetic distortions remain particularly weak at handling realistic distortions (and vice versa).
We propose a deep bilinear model for blind image quality assessment (BIQA) that handles both synthetic and authentic distortions.
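Bilinear fusion of two feature streams, one tuned for synthetic and one for authentic distortions, can be illustrated with a small sketch. The outer-product pooling with signed square-root and L2 normalization shown here is a common bilinear-pooling recipe; the function name and details are assumptions, not the paper's exact design.

```python
import numpy as np

def bilinear_pool(feat_synthetic, feat_authentic, eps=1e-8):
    """Fuse two feature vectors by outer-product (bilinear) pooling.

    The outer product captures pairwise interactions between the two
    streams; signed square-root and L2 normalization then stabilize
    the resulting representation before regression to a quality score.
    """
    b = np.outer(feat_synthetic, feat_authentic).ravel()
    b = np.sign(b) * np.sqrt(np.abs(b) + eps)   # signed square-root
    return b / (np.linalg.norm(b) + eps)         # L2 normalization
```

A scalar quality score would then be predicted from the pooled vector by a small regressor (e.g. a linear layer), trained on human opinion scores.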
Ranked #2 on Video Quality Assessment on MSU NR VQA Database
Computational models for blind image quality assessment (BIQA) are typically trained in well-controlled laboratory environments with limited generalizability to realistically distorted images.