Generalization and Stability of GANs: A theory and promise from data augmentation

1 Jan 2021 · Khoat Than, Nghia Vu

Training instability in generative adversarial networks (GANs) is a notoriously difficult problem, and the generalization ability of GANs remains an open question. In this paper, we analyze various sources of instability, which arise not only from the discriminator but also from the generator. We then show that requiring Lipschitz continuity of both the discriminator and the generator leads to generalization and stability for GANs. As a consequence, this work naturally provides a generalization bound for a large class of existing models and explains the success of recent large-scale generators. Finally, we show why data augmentation can ensure Lipschitz continuity of both the discriminator and the generator. This work therefore provides a theoretical basis for a simple way to ensure generalization in GANs, explaining the highly successful use of data augmentation for GANs in practice.
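To make the role of data augmentation concrete, below is a minimal sketch (not the authors' code) of how augmentation is commonly applied in GAN training: the same kind of stochastic, differentiable transform is applied to both real and generated samples before they reach the discriminator, so that gradients still flow back to the generator. The `augment` function, the network `D`, and the non-saturating loss choices here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def augment(x: torch.Tensor) -> torch.Tensor:
    """Differentiable augmentation for NCHW image batches:
    random horizontal flip plus a small random translation."""
    if torch.rand(()) < 0.5:
        x = torch.flip(x, dims=[3])           # horizontal flip
    dx, dy = torch.randint(-2, 3, (2,))       # shift by -2..2 pixels
    x = torch.roll(x, shifts=(int(dy), int(dx)), dims=(2, 3))
    return x

def discriminator_loss(D, real: torch.Tensor, fake: torch.Tensor):
    # Augment BOTH real and fake batches, so the discriminator is
    # trained on the augmented data distributions rather than the raw ones.
    d_real = D(augment(real))
    d_fake = D(augment(fake.detach()))
    return F.softplus(-d_real).mean() + F.softplus(d_fake).mean()

def generator_loss(D, fake: torch.Tensor):
    # The augmentation is differentiable, so the generator receives
    # gradients through the augmented samples.
    return F.softplus(-D(augment(fake))).mean()
```

In practice the augmentations are kept label-preserving and differentiable (flips, translations, color jitter), which smooths the effective distributions the discriminator must separate; the paper's analysis connects this smoothing to Lipschitz continuity and hence to generalization.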
