Data-Free Knowledge Distillation for Image Super-Resolution

Convolutional network compression methods require training data to achieve acceptable results, but training data is often unavailable due to privacy and transmission constraints. Recent works therefore focus on learning efficient networks without the original training data, i.e., data-free model compression. However, most existing algorithms are developed for image recognition or segmentation tasks. In this paper, we study data-free compression for the single image super-resolution (SISR) task, which is widely used in mobile phones and smart cameras. Specifically, we analyze the relationship between the outputs and inputs of the pre-trained network and design a generator with a series of loss functions to capture as much useful information as possible. The generator is then trained to synthesize training samples whose distribution is similar to that of the original data. To further alleviate the difficulty of training the student network on synthetic data alone, we introduce a progressive distillation scheme. Experiments on various datasets and architectures demonstrate that the proposed method can effectively learn portable student networks without the original data, e.g., with only a 0.16 dB PSNR drop on Set5 for x2 super-resolution. Code will be available at https://github.com/huaweinoah/Data-Efficient-Model-Compression.
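To make the pipeline concrete, below is a minimal sketch of such a data-free distillation loop, assuming PyTorch. The toy generator, teacher, and student networks, the reconstruction-style generator loss (a downscale-consistency term standing in for the paper's loss functions), and all hyperparameters are illustrative assumptions, not the authors' actual architectures or objectives; the progressive distillation schedule is also omitted for brevity.

```python
# Hypothetical sketch of data-free distillation for SISR (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps noise vectors to synthetic low-resolution training images."""
    def __init__(self, z_dim=128, img_size=32):
        super().__init__()
        self.fc = nn.Linear(z_dim, 3 * img_size * img_size)
        self.img_size = img_size
    def forward(self, z):
        x = torch.sigmoid(self.fc(z))
        return x.view(-1, 3, self.img_size, self.img_size)

def tiny_sr_net(width):
    """Placeholder x2 super-resolution network (stand-in for a real SISR model)."""
    return nn.Sequential(
        nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, 3 * 4, 3, padding=1), nn.PixelShuffle(2),
    )

teacher = tiny_sr_net(64).eval()   # pre-trained network, kept frozen
for p in teacher.parameters():
    p.requires_grad_(False)
student = tiny_sr_net(16)          # smaller, portable student
gen = Generator()

opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_s = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(1000):
    z = torch.randn(8, 128)

    # 1) Update the generator so its samples carry information useful to the
    #    teacher. Here: a simple input-output consistency loss, encouraging the
    #    downscaled teacher output to match the synthetic LR input (an assumed
    #    stand-in for the paper's loss functions).
    fake_lr = gen(z)
    sr = teacher(fake_lr)
    loss_g = F.l1_loss(F.avg_pool2d(sr, 2), fake_lr)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # 2) Distill: train the student to mimic the teacher on synthetic data.
    fake_lr = gen(z).detach()
    with torch.no_grad():
        target = teacher(fake_lr)
    loss_s = F.l1_loss(student(fake_lr), target)
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
```

In this sketch the two updates alternate each step; the paper's progressive scheme would instead stage the student's training to ease optimization on purely synthetic data.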
