On Catastrophic Forgetting and Mode Collapse in Generative Adversarial Networks

In this paper, we show that Generative Adversarial Networks (GANs) suffer from catastrophic forgetting even when they are trained to approximate a single target distribution. We show that GAN training is a continual learning problem in which the sequence of changing model distributions is the sequence of tasks presented to the discriminator...
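
The following is a minimal sketch (not from the paper) of the framing described above: in a standard GAN loop, at step t the discriminator is trained to separate real data from samples of the current generator distribution, and because that distribution shifts every time the generator is updated, the discriminator effectively faces a sequence of related but changing tasks. The 1D Gaussian target, network sizes, and hyperparameters below are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only: a minimal GAN loop in which the discriminator D
# is trained, at step t, against the *current* generator distribution p_{G_t}.
# Because p_{G_t} changes as G is updated, D sees a sequence of slightly
# different classification tasks -- the continual-learning view in the abstract.
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Fixed (single) target distribution: N(3, 0.5) -- an assumed toy example
    return 3.0 + 0.5 * torch.randn(n, 1)

for t in range(2000):
    # Discriminator step: "task t" = real data vs. samples from p_{G_t}
    x_real = real_batch()
    x_fake = G(torch.randn(64, 2)).detach()   # samples from the current generator
    loss_D = bce(D(x_real), torch.ones(64, 1)) + bce(D(x_fake), torch.zeros(64, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: updating G changes p_{G_t} -> p_{G_{t+1}}, so the
    # discriminator's next "task" differs from the one it just solved
    x_fake = G(torch.randn(64, 2))
    loss_G = bce(D(x_fake), torch.ones(64, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```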
