Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has not yet been proven. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions.
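To make the update rule concrete, here is a minimal PyTorch sketch: the only TTUR-specific ingredient is that the discriminator and generator are trained with different learning rates, putting the discriminator on a faster time scale. The networks, data, loss, and learning rates below are illustrative placeholders, not the paper's experimental setup.

```python
import torch
from torch import nn

# Toy generator and discriminator (assumed architectures, 2-D toy data).
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

# The core of TTUR: separate learning rates so the discriminator runs on a
# faster time scale than the generator (rates here are illustrative).
opt_D = torch.optim.Adam(D.parameters(), lr=3e-4)
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)

bce = nn.BCEWithLogitsLoss()
real_data = torch.randn(64, 2)  # stand-in for a batch of real samples

for step in range(100):
    # Discriminator update (fast time scale).
    z = torch.randn(64, 8)
    fake = G(z).detach()
    loss_D = (bce(D(real_data), torch.ones(64, 1))
              + bce(D(fake), torch.zeros(64, 1)))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Generator update (slow time scale). Note: one generator step per
    # discriminator step -- TTUR separates the time scales through the
    # learning rates, not through extra discriminator updates.
    z = torch.randn(64, 8)
    loss_G = bce(D(G(z)), torch.ones(64, 1))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
```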
| Task | Dataset | Model | Metric name | Metric value | Global rank |
| --- | --- | --- | --- | --- | --- |
| Image Generation | CIFAR-10 | WGAN-GP + TT Update Rule | FID | 24.8 | #20 |
| Image Generation | LSUN Bedroom 256 x 256 | WGAN-GP + TT Update Rule | FID | 9.5 | #2 |