GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

NeurIPS 2017 • Martin Heusel • Hubert Ramsauer • Thomas Unterthiner • Bernhard Nessler • Sepp Hochreiter

Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proven. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions: TTUR has an individual learning rate for the discriminator and for the generator. Using the theory of stochastic approximation, we prove that TTUR converges under mild assumptions to a stationary local Nash equilibrium. For evaluating GANs at image generation, we introduce the Fréchet Inception Distance (FID), which captures the similarity of generated images to real ones better than the Inception Score.
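To make the two time scales concrete, here is a minimal PyTorch training-step sketch. It is not the paper's code: the toy MLPs, the standard binary cross-entropy GAN loss, and the particular learning rates are illustrative assumptions. The essential TTUR ingredient is simply that the discriminator and the generator each get their own optimizer with a different learning rate:

```python
import torch
from torch import nn

# Toy models; any generator/discriminator pair works here.
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784))
D = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1))

# TTUR: two separate optimizers with two different learning rates
# (values are illustrative; the convergence result is about having
# two time scales, not these particular numbers).
opt_D = torch.optim.Adam(D.parameters(), lr=4e-4)
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)

bce = nn.BCEWithLogitsLoss()

def train_step(real):
    batch = real.size(0)
    z = torch.randn(batch, 64)

    # Discriminator update (its own, here larger, learning rate).
    opt_D.zero_grad()
    d_loss = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(G(z).detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_D.step()

    # Generator update (a different, here smaller, learning rate).
    opt_G.zero_grad()
    g_loss = bce(D(G(z)), torch.ones(batch, 1))
    g_loss.backward()
    opt_G.step()
```

The same pattern carries over unchanged to WGAN-GP or other objectives, since the result covers arbitrary GAN loss functions.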

Evaluation


| Task | Dataset | Model | Metric | Value | Global rank |
| --- | --- | --- | --- | --- | --- |
| Image Generation | CIFAR-10 | WGAN-GP + TT Update Rule | FID | 24.8 | #20 |
| Image Generation | LSUN Bedroom 256×256 | WGAN-GP + TT Update Rule | FID | 9.5 | #2 |
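
Both results are reported as Fréchet Inception Distance (FID), the evaluation metric this paper introduces; lower is better. FID fits Gaussians $(\mu_1, \Sigma_1)$ and $(\mu_2, \Sigma_2)$ to Inception-v3 activations of real and generated images and measures the Fréchet distance between them:

$$\mathrm{FID} = \lVert \mu_1 - \mu_2 \rVert_2^2 + \operatorname{Tr}\!\left(\Sigma_1 + \Sigma_2 - 2\,(\Sigma_1 \Sigma_2)^{1/2}\right)$$

A minimal NumPy/SciPy sketch of this computation, assuming the feature statistics have already been extracted (the function name `fid_from_stats` is ours, not the paper's):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid_from_stats(mu1, sigma1, mu2, sigma2):
    """Frechet distance between Gaussians (mu1, sigma1) and (mu2, sigma2)."""
    covmean = sqrtm(sigma1 @ sigma2)   # matrix square root of the product
    if np.iscomplexobj(covmean):       # drop tiny imaginary parts from numerics
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```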