DGL-GAN: Discriminator Guided Learning for GAN Compression

13 Dec 2021  ·  Yuesong Tian, Li Shen, Xiang Tian, Dacheng Tao, Zhifeng Li, Wei Liu, Yaowu Chen

Generative Adversarial Networks (GANs) with high computation costs, e.g., BigGAN and StyleGAN2, have achieved remarkable results in synthesizing high-resolution images from random noise. Reducing the computation cost of GANs while preserving their ability to generate photo-realistic images remains a challenging problem. In this work, we propose a novel yet simple Discriminator Guided Learning approach for compressing vanilla GANs, dubbed DGL-GAN. Motivated by the observation that the teacher discriminator may contain meaningful information about both real and fake images, we transfer knowledge from the teacher discriminator solely via the adversarial interaction between the teacher discriminator and the student generator. We apply DGL-GAN to compress the two most representative large-scale vanilla GANs, i.e., StyleGAN2 and BigGAN. Experiments show that DGL-GAN achieves state-of-the-art (SOTA) results on both StyleGAN2 and BigGAN. Moreover, DGL-GAN is also effective in boosting the performance of the original uncompressed GANs: the original uncompressed StyleGAN2 trained with DGL-GAN reaches an FID of 2.65 on FFHQ, a new state-of-the-art result. Code and models are available at https://github.com/yuesongtian/DGL-GAN
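
To make the guided-learning idea concrete, below is a minimal PyTorch-style sketch of a student-generator update that is trained adversarially against both its own student discriminator and a frozen, pretrained teacher discriminator. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the module names (student_G, student_D, teacher_D), the balancing weight lam, and the use of the non-saturating softplus loss (as in StyleGAN2) are all assumptions for the sake of the example.

```python
import torch
import torch.nn.functional as F

def freeze(module):
    """Freeze the teacher discriminator's parameters. Gradients still
    flow *through* it to the student generator's output, which is what
    delivers the teacher's guidance signal."""
    for p in module.parameters():
        p.requires_grad_(False)
    module.eval()
    return module

def generator_step(student_G, student_D, teacher_D, opt_G, z, lam=1.0):
    """One student-generator update with teacher-discriminator guidance.

    Combines the usual non-saturating adversarial loss against the
    student discriminator with an analogous term against the frozen
    teacher discriminator. lam is a hypothetical weight balancing the
    two terms; the paper's exact formulation may differ.
    """
    opt_G.zero_grad()
    fake = student_G(z)
    # Standard non-saturating generator loss from the student discriminator.
    loss_student = F.softplus(-student_D(fake)).mean()
    # Guidance term: also try to fool the pretrained teacher discriminator.
    loss_teacher = F.softplus(-teacher_D(fake)).mean()
    loss = loss_student + lam * loss_teacher
    loss.backward()
    opt_G.step()
    return loss.item()
```

In such a setup the student discriminator would still be trained separately with its usual real-vs-fake loss; only the generator objective gains the extra term from the frozen teacher.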
