Learning the Non-Differentiable Optimization for Blind Super-Resolution

CVPR 2021 · Zheng Hui, Jie Li, Xiumei Wang, Xinbo Gao

Previous convolutional neural network (CNN) based blind super-resolution (SR) methods usually adopt an iterative optimization strategy to approximate the ground truth (GT) step by step. This strategy incurs extra computational cost and therefore slow inference. At present, most blind SR algorithms focus on high-fidelity results and are typically trained with an L1 loss. To further improve the visual quality of SR results, a perceptual metric such as NIQE is needed to guide the network optimization. However, because NIQE is non-differentiable, it cannot serve directly as a loss function. To address these issues, we propose an adaptive modulation network (AMNet) for multiple-degradation SR, built around the pivotal adaptive modulation layer (AMLayer), an efficient yet lightweight layer that fuses blur-kernel and image features. Equipped with a blur-kernel predictor, AMNet naturally extends to a blind SR model. Instead of adopting an iterative strategy, we make the blur-kernel predictor trainable within the whole blind SR model, in which AMNet is already well trained. We also fit deep reinforcement learning into the blind SR model (AMNet-RL) to tackle the non-differentiable optimization problem. Specifically, the blur-kernel predictor acts as the actor, estimating the blur kernel from the input low-resolution (LR) image. The reward is defined by a pre-specified differentiable or non-differentiable metric. Extensive experiments show that our model outperforms state-of-the-art methods on both fidelity and perceptual metrics.
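The abstract describes the AMLayer only as a lightweight fusion layer between blur-kernel and image features. Below is a minimal PyTorch sketch of one plausible realization, assuming the kernel is first embedded into a code vector and then used to predict channel-wise affine modulation parameters (in the spirit of feature-wise modulation). The class name, shapes, and scale-and-shift design are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AMLayer(nn.Module):
    """Hypothetical adaptive modulation layer: fuses a blur-kernel
    embedding with image features via channel-wise scale and shift.
    Illustrative sketch only, not the paper's exact architecture."""

    def __init__(self, num_feats: int, kernel_dim: int):
        super().__init__()
        # Predict per-channel scale (gamma) and shift (beta) from the kernel code.
        self.to_gamma = nn.Linear(kernel_dim, num_feats)
        self.to_beta = nn.Linear(kernel_dim, num_feats)

    def forward(self, feats: torch.Tensor, kernel_code: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W); kernel_code: (B, kernel_dim)
        gamma = self.to_gamma(kernel_code).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        beta = self.to_beta(kernel_code).unsqueeze(-1).unsqueeze(-1)    # (B, C, 1, 1)
        return feats * (1 + gamma) + beta  # kernel-conditioned feature modulation
```

Stacking such layers throughout the SR backbone would let every stage adapt its features to the estimated degradation, which is one way a single network can handle multiple degradations without iterative refinement.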
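To make the non-differentiable-reward idea concrete, here is a REINFORCE-style sketch of one actor update, assuming the kernel predictor outputs a Gaussian policy over kernel codes and the reward is any scalar quality score (e.g., negative NIQE of the SR output). The function names, the `reward_fn` callable, and the Gaussian parameterization are assumptions for illustration; the paper's exact RL formulation may differ.

```python
import torch
import torch.nn as nn

def reinforce_step(actor: nn.Module, sr_net: nn.Module, lr_img: torch.Tensor,
                   reward_fn, optimizer: torch.optim.Optimizer) -> float:
    """One policy-gradient update of the blur-kernel predictor (the actor).
    `reward_fn` maps an SR image batch to a per-image scalar reward and may
    be non-differentiable; `sr_net` is a trained, frozen AMNet. Sketch only."""
    mean, log_std = actor(lr_img)                    # Gaussian policy over kernel codes
    dist = torch.distributions.Normal(mean, log_std.exp())
    kernel_code = dist.sample()                      # sampled action (detached)
    with torch.no_grad():
        sr_img = sr_net(lr_img, kernel_code)         # super-resolve with sampled kernel
        reward = reward_fn(sr_img)                   # (B,) rewards, e.g. -NIQE per image
    # REINFORCE: weight the action log-probability by the observed reward.
    loss = -(dist.log_prob(kernel_code).sum(dim=-1) * reward).mean()
    optimizer.zero_grad()
    loss.backward()                                  # gradients reach only the actor
    optimizer.step()
    return float(loss)
```

Because the reward enters only as a weighting factor on the log-probability, the metric itself never needs a gradient; in practice one would also subtract a reward baseline to reduce the variance of this estimator.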
