Regularizing Generative Adversarial Networks under Limited Data

Recent years have witnessed the rapid progress of generative adversarial networks (GANs). However, the success of GAN models hinges on a large amount of training data. This work proposes a regularization approach for training robust GAN models on limited data. We theoretically show a connection between the regularized loss and an f-divergence called LeCam-divergence, which we find is more robust under limited training data. Extensive experiments on several benchmark datasets demonstrate that the proposed regularization scheme 1) improves the generalization performance and stabilizes the learning dynamics of GAN models under limited training data, and 2) complements recent data augmentation methods. These properties enable GAN models to achieve state-of-the-art performance when only limited training data from the ImageNet benchmark is available.
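The regularization described in the abstract penalizes the discriminator's outputs for drifting far from moving averages of its predictions on the opposite branch (real vs. generated). Below is a minimal NumPy sketch of that idea; the function and class names (`lecam_reg`, `EMA`), the decay value, and the scalar-output assumption are illustrative choices, not the paper's reference implementation.

```python
import numpy as np

class EMA:
    """Exponential moving average tracker for scalar discriminator outputs.
    Decay of 0.99 is an illustrative default, not a value from the paper."""
    def __init__(self, decay=0.99):
        self.decay = decay
        self.value = 0.0

    def update(self, outputs):
        # Blend the running average with the mean of the current batch.
        self.value = self.decay * self.value + (1.0 - self.decay) * float(np.mean(outputs))
        return self.value

def lecam_reg(d_real, d_fake, ema_real, ema_fake):
    """Regularization sketch: squared distance between current discriminator
    outputs and the EMA anchor of the opposite branch. Added to the
    discriminator loss with a small weight (e.g. lambda ~ 0.01-0.3)."""
    return np.mean((d_real - ema_fake) ** 2) + np.mean((d_fake - ema_real) ** 2)
```

In a training loop, one would update the two EMA trackers with each batch's discriminator outputs and add `lambda * lecam_reg(...)` to the discriminator loss; the weight `lambda` is a hyperparameter tuned per dataset.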

PDF Abstract (CVPR 2021)
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Image Generation | 25% ImageNet 128x128 | LeCAM + DA | FID | 11.16 | # 1 |
| Image Generation | 25% ImageNet 128x128 | LeCAM + DA | IS | 84.7 | # 1 |
| Image Generation | CAT 256x256 | StyleGAN2 + DA + RLC (Ours) | FID | 10.16 | # 1 |
| Image Generation | CIFAR-10 | LeCAM (BigGAN + DA) | FID | 8.46 | # 38 |
| Image Generation | CIFAR-10 | LeCAM (StyleGAN2 + ADA) | FID | 2.47 | # 16 |
| Image Generation | CIFAR-100 | LeCAM (StyleGAN2 + ADA) | FID | 2.99 | # 1 |
| Image Generation | CIFAR-100 | LeCAM (BigGAN + DA) | FID | 11.2 | # 3 |
| Image Generation | FFHQ 256x256 | LeCAM (StyleGAN2 + ADA) | FID | 3.49 | # 5 |
| Image Generation | ImageNet - 10% labeled data | LeCAM + DA | FID | 24.38 | # 1 |
| Image Generation | ImageNet - 10% labeled data | LeCAM + DA | IS | 42.3 | # 1 |
| Image Generation | ImageNet 128x128 | LeCAM + DA | FID | 6.54 | # 6 |
| Image Generation | ImageNet 128x128 | LeCAM + DA | IS | 108 | # 5 |

Methods