Boosting Lightweight Single Image Super-resolution via Joint-distillation

ACM 2021  ·  Xiaotong Luo, Qiuyuan Liang, Ding Liu, Yanyun Qu

The rise of deep learning has facilitated the development of single image super-resolution (SISR). However, growing model complexity and memory occupation severely hinder practical deployment on resource-limited devices. In this paper, we propose a novel joint-distillation (JDSR) framework to boost the representation of various off-the-shelf lightweight SR models. The framework comprises two stages: superior-LR generation and joint-distillation learning. The superior LR is obtained from the HR image itself. With fewer than $300$K parameters, a peer network taking the superior LR as input achieves SR performance comparable to large models such as RCAN (15M parameters), which makes it an affordable peer to train alongside the original model. The joint-distillation learning consists of internal self-distillation and external mutual learning. Internal self-distillation achieves model self-boosting by transferring knowledge from deeper SR outputs to shallower ones: each intermediate SR output is supervised by the HR image and by a soft label derived from the subsequent deeper outputs. To shrink the capacity gap between shallow and deep layers, a soft-label generator is designed in a progressive backward-fusion manner, with meta-learning for adaptive weight fine-tuning. External mutual learning lets the original model and its peer network exchange information during training. Moreover, a curriculum learning strategy and a performance-gap threshold are introduced to balance the convergence rates of the original SR model and its peer network. Comprehensive experiments on benchmark datasets demonstrate that our approach improves the performance of recent lightweight SR models by a large margin, with the same model architecture and inference expense.
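The abstract's training objective can be sketched as a single combined loss: each intermediate SR output receives hard supervision from the HR image plus a soft label from the next deeper output (internal self-distillation), and the final output is additionally aligned with a peer network's output (external mutual learning). The sketch below is a minimal illustration, not the paper's implementation: it uses a plain L1 distance throughout, takes the next deeper output directly as the soft label rather than the paper's meta-learned progressive backward-fusion generator, and the weights `alpha` and `beta` are hypothetical.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return float(np.mean(np.abs(a - b)))

def joint_distillation_loss(inter_outputs, peer_output, hr,
                            alpha=0.5, beta=0.1):
    """Simplified joint-distillation objective (illustrative only).

    inter_outputs: list of intermediate SR outputs, ordered shallow -> deep.
    peer_output:   final SR output of the peer network (mutual learning).
    hr:            ground-truth high-resolution image.
    alpha, beta:   illustrative loss weights (not taken from the paper).
    """
    loss = 0.0
    for i, out in enumerate(inter_outputs):
        # Hard supervision: every intermediate output matches the HR image.
        loss += l1(out, hr)
        # Soft supervision: shallower output mimics the next deeper one
        # (stands in for the paper's meta-learned soft-label generator).
        if i + 1 < len(inter_outputs):
            loss += alpha * l1(out, inter_outputs[i + 1])
    # External mutual learning: align the deepest output with the peer's.
    loss += beta * l1(inter_outputs[-1], peer_output)
    return loss
```

In training, both the original lightweight model and its peer would minimize a loss of this form against each other's outputs, with the curriculum strategy and performance-gap threshold deciding when the mutual term is active.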
