Exploiting Knowledge Distillation for Few-Shot Image Generation

29 Sep 2021 · Xingzhong Hou, Boxiao Liu, Fang Wan, Haihang You

Few-shot image generation, which trains generative models on limited examples, is of practical importance. The existing pipeline first pretrains a source model (containing a generator and a discriminator) on a large-scale dataset and then finetunes it on a target domain with limited samples. The main challenge is that the few-shot model overfits easily, which can be attributed to two aspects: the lack of sample diversity in the generator and the failure of fidelity discrimination in the discriminator. In this paper, we treat the diversity and fidelity captured by the source model as a kind of knowledge and propose to improve the generation results through knowledge distillation. The source model trained on the large-scale dataset is regarded as the teacher, and the target model (student) is learned by introducing a momentum relation distillation module to produce diverse samples and a source discrimination distillation module to ensure fidelity discrimination. With these two modules, the proposed method outperforms the state of the art by a large margin, i.e., 10% for FFHQ to Sketches, while achieving better diversity.
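As a rough illustration of the relation-distillation idea described above, the sketch below matches the pairwise similarity structure of a batch of student-generated features to that of the teacher. This is a minimal PyTorch sketch under our own assumptions, not the paper's exact formulation; the names `student_G`, `teacher_G`, and the specific loss are illustrative.

```python
import torch
import torch.nn.functional as F

def relation_distillation_loss(student_feats, teacher_feats):
    """Illustrative relation-distillation loss (not the paper's exact method).

    Matches the pairwise cosine-similarity structure of a batch of student
    features to that of the corresponding teacher features, encouraging the
    finetuned (student) generator to preserve the diversity relations of the
    pretrained (teacher) generator.
    """
    def pairwise_relations(feats):
        feats = F.normalize(feats.flatten(1), dim=1)          # (B, D), unit-norm
        sim = feats @ feats.t()                               # (B, B) cosine similarities
        batch = sim.size(0)
        mask = ~torch.eye(batch, dtype=torch.bool, device=sim.device)
        # Distribution over the other samples in the batch for each anchor.
        return F.softmax(sim[mask].view(batch, batch - 1), dim=1)

    p_teacher = pairwise_relations(teacher_feats)
    p_student = pairwise_relations(student_feats)
    # KL divergence between teacher and student relation distributions.
    return F.kl_div(p_student.log(), p_teacher, reduction="batchmean")


# Hypothetical usage: feed the same latents through both generators and
# distill the batch-level relations from teacher to student.
# z = torch.randn(batch_size, latent_dim)
# loss_rel = relation_distillation_loss(student_G(z), teacher_G(z).detach())
```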
