First Order Generative Adversarial Networks

GANs excel at learning high-dimensional distributions, but they can update generator parameters in directions that do not correspond to the steepest descent direction of the objective. Prominent examples of problematic update directions include those used in both Goodfellow's original GAN and the WGAN-GP. To formally describe an optimal update direction, we introduce a theoretical framework which allows the derivation of requirements on both the divergence and the corresponding method for determining an update direction; these requirements guarantee unbiased mini-batch updates in the direction of steepest descent. We propose a novel divergence which approximates the Wasserstein distance while regularizing the critic's first-order information. Together with an accompanying update direction, this divergence fulfills the requirements for unbiased steepest descent updates. We verify our method, the First Order GAN, with image generation on CelebA, LSUN and CIFAR-10, and set a new state of the art on the One Billion Word language generation task. Code to reproduce experiments is available.

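For orientation, the sketch below shows the kind of first-order critic regularization the abstract refers to, using the familiar WGAN-GP gradient-norm penalty as a stand-in. This is a minimal illustration under assumptions: the `Critic` module, toy dimensions, and `gp_weight` value are hypothetical, and the paper's actual divergence and its accompanying update direction differ from this baseline.

```python
# Hypothetical sketch of a gradient-penalty critic objective (WGAN-GP style).
# This is NOT the First Order GAN's divergence; it only illustrates what
# "regularizing the critic's first-order information" looks like in practice.
import torch
import torch.nn as nn


class Critic(nn.Module):
    """Tiny MLP critic for vector-shaped toy data (illustrative only)."""

    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, x):
        return self.net(x)


def gradient_penalty(critic, real, fake):
    # Interpolate between real and generated samples, then penalize deviations
    # of the critic's gradient norm from 1 (the usual first-order regularizer).
    eps = torch.rand(real.size(0), 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )[0]
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()


def critic_loss(critic, real, fake, gp_weight=10.0):
    # Wasserstein-style critic objective plus the first-order penalty term.
    return (
        critic(fake).mean()
        - critic(real).mean()
        + gp_weight * gradient_penalty(critic, real, fake)
    )


# Example usage with toy data (batch of 8 samples of dimension 64):
critic = Critic()
real = torch.randn(8, 64)
fake = torch.randn(8, 64)
loss = critic_loss(critic, real, fake)
loss.backward()
```

The penalty constrains the norm of the critic's gradient at interpolated points; the First Order GAN's contribution is a different divergence and update direction built so that mini-batch updates remain unbiased estimates of the steepest descent direction.
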
| Task             | Dataset              | Model | Metric Name | Metric Value | Global Rank |
|------------------|----------------------|-------|-------------|--------------|-------------|
| Image Generation | CIFAR-10             | FOGAN | FID         | 27.4         | #131        |
| Image Generation | LSUN Bedroom 64 x 64 | FOGAN | FID         | 11.4         | #2          |
