MSG-GAN: Multi-Scale Gradients for Generative Adversarial Networks

CVPR 2020 · Animesh Karnewar, Oliver Wang

While Generative Adversarial Networks (GANs) have seen huge successes in image synthesis tasks, they are notoriously difficult to adapt to different datasets, in part due to instability during training and sensitivity to hyperparameters. One commonly accepted reason for this instability is that gradients passing from the discriminator to the generator become uninformative when there isn't enough overlap in the supports of the real and fake distributions. In this work, we propose the Multi-Scale Gradient Generative Adversarial Network (MSG-GAN), a simple but effective technique for addressing this by allowing the flow of gradients from the discriminator to the generator at multiple scales. This technique provides a stable approach for high resolution image synthesis, and serves as an alternative to the commonly used progressive growing technique. We show that MSG-GAN converges stably on a variety of image datasets of different sizes, resolutions and domains, as well as different types of loss functions and architectures, all with the same set of fixed hyperparameters. When compared to state-of-the-art GANs, our approach matches or exceeds the performance in most of the cases we tried.
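
As a rough illustration of the idea, the sketch below shows a toy PyTorch generator that emits an RGB image at every intermediate resolution, and a discriminator that concatenates the matching-resolution image into each of its blocks so that gradients reach the generator at all scales. The class names, channel counts, and the 16x16 top resolution are illustrative assumptions for this sketch, not the authors' reference implementation.

```python
# Minimal PyTorch sketch of multi-scale gradient flow (illustrative only;
# layer choices, channel counts, and resolutions are assumptions).
import torch
import torch.nn as nn


class MSGGenerator(nn.Module):
    """Generator that emits an RGB image at every intermediate resolution."""

    def __init__(self, latent_dim=128, channels=(128, 64, 32)):
        super().__init__()
        # 1x1 latent vector -> 4x4 feature map
        self.init = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, channels[0], 4),
            nn.LeakyReLU(0.2),
        )
        self.blocks = nn.ModuleList()
        self.to_rgb = nn.ModuleList([nn.Conv2d(channels[0], 3, 1)])
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            self.blocks.append(nn.Sequential(
                nn.Upsample(scale_factor=2),
                nn.Conv2d(c_in, c_out, 3, padding=1),
                nn.LeakyReLU(0.2),
            ))
            self.to_rgb.append(nn.Conv2d(c_out, 3, 1))

    def forward(self, z):
        x = self.init(z.view(z.size(0), -1, 1, 1))
        outputs = [self.to_rgb[0](x)]                    # 4x4 image
        for block, to_rgb in zip(self.blocks, self.to_rgb[1:]):
            x = block(x)
            outputs.append(to_rgb(x))                    # 8x8, 16x16, ...
        return outputs                                   # low -> high resolution


class MSGDiscriminator(nn.Module):
    """Discriminator that injects the matching-resolution image into every
    block, so its gradients reach the generator at all scales."""

    def __init__(self, channels=(32, 64, 128)):
        super().__init__()
        self.from_rgb = nn.Conv2d(3, channels[0], 1)     # highest resolution
        self.blocks = nn.ModuleList()
        in_ch = channels[0]
        for c_out in channels[1:]:
            self.blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, c_out, 3, padding=1),
                nn.LeakyReLU(0.2),
                nn.AvgPool2d(2),                         # halve the resolution
            ))
            in_ch = c_out + 3                            # +3 for the injected image
        self.final = nn.Conv2d(in_ch, 1, 4)              # 4x4 -> real/fake score

    def forward(self, imgs):
        # imgs: list of images from low to high resolution (generator order)
        imgs = list(reversed(imgs))                      # process high -> low
        x = self.from_rgb(imgs[0])
        for block, img in zip(self.blocks, imgs[1:]):
            x = block(x)
            x = torch.cat([x, img], dim=1)               # concat image at this scale
        return self.final(x).view(-1)


if __name__ == "__main__":
    G, D = MSGGenerator(), MSGDiscriminator()
    z = torch.randn(8, 128)
    fakes = G(z)                  # shapes: 8x3x4x4, 8x3x8x8, 8x3x16x16
    scores = D(fakes)             # gradients flow back to G at every scale
    # Real samples would be the training images downsampled to the same set
    # of resolutions, ordered low -> high, before being passed to D.
```

Because the multi-scale connections replace progressive growing rather than the loss or the backbone, the same construction composes with different GAN losses and architectures, which is reflected in the MSG-ProGAN and MSG-StyleGAN variants reported below.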

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Image Generation | CelebA-HQ 1024x1024 | MSG-StyleGAN | FID | 6.37 | # 5 |
| Image Generation | CIFAR-10 | MSG-ProGAN | Inception score | 7.92 | # 56 |
| Image Generation | FFHQ 1024 x 1024 | MSG-StyleGAN | FID | 5.8 | # 14 |
| Image Generation | Indian Celebs 256 x 256 | MSG-StyleGAN | FID | 28.44 | # 1 |
| Image Generation | LSUN Churches 256 x 256 | MSG-StyleGAN | FID | 5.2 | # 17 |
| Image Generation | LSUN Churches 256 x 256 | MSG-StyleGAN | Clean-FID (trainfull) | 5.38 ± 0.03 | # 4 |
| Image Generation | Oxford 102 Flowers 256 x 256 | MSG-StyleGAN | FID | 19.60 | # 2 |
