Deep Decomposition and Bilinear Pooling Network for Blind Night-Time Image Quality Evaluation

12 May 2022 · Qiuping Jiang, Jiawu Xu, Yudong Mao, Wei Zhou, Xiongkuo Min, Guangtao Zhai

Blind image quality assessment (BIQA), which aims to accurately predict image quality without any pristine reference information, has attracted extensive attention over the past decades. In particular, great progress has been achieved with the help of deep neural networks. However, BIQA for night-time images (NTIs), which usually suffer from complicated authentic distortions such as reduced visibility, low contrast, additive noise, and color distortion, remains less investigated. These diverse authentic degradations particularly challenge the design of effective deep neural networks for blind NTI quality evaluation (NTIQE). In this paper, we propose a novel deep decomposition and bilinear pooling network (DDB-Net) to better address this issue. The DDB-Net contains three modules, i.e., an image decomposition module, a feature encoding module, and a bilinear pooling module. The image decomposition module is inspired by the Retinex theory and decouples the input NTI into an illumination layer component responsible for illumination information and a reflectance layer component responsible for content information. Then, the feature encoding module learns feature representations of the degradations rooted in the two decoupled components separately. Finally, by modeling illumination-related and content-related degradations as two-factor variations, the two feature sets are bilinearly pooled together to form a unified representation for quality prediction. The superiority of the proposed DDB-Net has been well validated by extensive experiments on several benchmark datasets. The source code will be made available soon.
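The abstract describes a three-stage pipeline: Retinex-style decomposition, per-component feature encoding, and bilinear pooling of the two feature sets. The following is a minimal PyTorch sketch of that pipeline, not the authors' implementation; all module names, layer widths, and the simple decomposition backbone are illustrative assumptions.

```python
# Minimal sketch of the decomposition -> encoding -> bilinear-pooling pipeline.
# Module names, layer sizes, and the decomposition backbone are assumptions for
# illustration only; they do not reproduce the authors' DDB-Net architecture.
import torch
import torch.nn as nn


class DecompositionNet(nn.Module):
    """Retinex-style split of an RGB image into illumination and reflectance layers."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 4, 3, padding=1),  # 1 illumination + 3 reflectance channels
        )

    def forward(self, x):
        out = torch.sigmoid(self.backbone(x))
        return out[:, :1], out[:, 1:]  # (illumination, reflectance)


def make_encoder(in_ch, feat_dim=128):
    """Small CNN encoder mapping one decomposed layer to a global feature vector."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )


class DDBNetSketch(nn.Module):
    """Decomposition -> two encoders -> bilinear pooling -> quality score."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.decompose = DecompositionNet()
        self.illum_encoder = make_encoder(1, feat_dim)
        self.refl_encoder = make_encoder(3, feat_dim)
        self.regressor = nn.Linear(feat_dim * feat_dim, 1)

    def forward(self, x):
        illum, refl = self.decompose(x)
        f_i = self.illum_encoder(illum)   # (B, D) illumination-related features
        f_r = self.refl_encoder(refl)     # (B, D) content-related features
        # Bilinear pooling: the outer product models pairwise interactions
        # between illumination-related and content-related degradations.
        b = torch.einsum('bi,bj->bij', f_i, f_r).flatten(1)
        b = torch.sign(b) * torch.sqrt(b.abs() + 1e-8)  # signed square-root normalization
        b = nn.functional.normalize(b, dim=1)           # L2 normalization
        return self.regressor(b).squeeze(1)             # predicted quality score


if __name__ == "__main__":
    model = DDBNetSketch()
    scores = model(torch.rand(2, 3, 224, 224))  # two random "night-time" inputs
    print(scores.shape)                         # torch.Size([2])
```

The signed square-root and L2 normalization after the outer product are a common way to stabilize bilinearly pooled features before regression; the exact normalization used in the paper may differ.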
