Progressive Growing of GANs for Improved Quality, Stability, and Variation

We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
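As a concrete illustration of the abstract's two core ideas, the sketch below (PyTorch; the names fade_in and MinibatchStdDev are illustrative, not the authors' reference code) shows how a newly added resolution block can be blended in linearly via a coefficient alpha that ramps from 0 to 1 during a transition phase, and a minimal version of the minibatch standard-deviation feature the paper proposes to increase variation. Treat it as a sketch of the technique, not the official implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fade_in(old_rgb, new_rgb, alpha):
    """Linearly blend the upsampled output of the previous resolution with
    the output of the newly added block; alpha ramps 0 -> 1 over the
    transition phase, after which only the new branch remains."""
    return (1.0 - alpha) * old_rgb + alpha * new_rgb

class MinibatchStdDev(nn.Module):
    """Append the minibatch-wide average standard deviation as one extra
    feature map (used near the end of the discriminator), letting the
    discriminator penalize batches with too little variation."""
    def forward(self, x):                      # x: (N, C, H, W)
        std = x.std(dim=0, unbiased=False)     # per-feature std over batch
        mean_std = std.mean()                  # average to a single scalar
        feat = mean_std.expand(x.size(0), 1, x.size(2), x.size(3))
        return torch.cat([x, feat], dim=1)     # (N, C+1, H, W)

# Transition step: growing from 16x16 to 32x32 (illustrative shapes only).
old16 = torch.randn(4, 3, 16, 16)              # RGB output at old resolution
old_up = F.interpolate(old16, scale_factor=2)  # upsample to 32x32
new32 = torch.randn(4, 3, 32, 32)              # RGB output of the new block
blended = fade_in(old_up, new32, alpha=0.3)    # partway through the fade-in
```

In the paper's schedule, alpha increases linearly with the number of training images shown, so each new resolution is introduced as a smooth residual change rather than an abrupt swap.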


Datasets


Introduced in the Paper: CelebA-HQ

Used in the Paper: CIFAR-10, LSUN

Results from the Paper


Task | Dataset | Model | Metric | Value | Global Rank
Image Generation | CelebA-HQ 1024x1024 | PGGAN | FID | 7.3 | #7
Image Generation | CIFAR-10 | PGGAN | Inception score | 8.8 | #33
Image Generation | LSUN Bedroom 256 x 256 | PGGAN | FID | 8.34 | #17
Image Generation | LSUN Cat 256 x 256 | PGGAN | FID | 37.52 | #7
Image Generation | LSUN Cat 256 x 256 | PGGAN | Clean-FID (trainfull) | 38.35 ± 0.32 | #4
Image Generation | LSUN Churches 256 x 256 | PGGAN | FID | 6.42 | #20
Image Generation | LSUN Churches 256 x 256 | PGGAN | Clean-FID (trainfull) | 6.43 ± 0.05 | #5
Image Generation | LSUN Horse 256 x 256 | PGGAN | Clean-FID (trainfull) | 14.09 ± 0.06 | #4

Methods