Class-Splitting Generative Adversarial Networks

21 Sep 2017  ·  Guillermo L. Grinblat, Lucas C. Uzal, Pablo M. Granitto ·

Generative Adversarial Networks (GANs) produce systematically better quality samples when class label information is provided, i.e., in the conditional GAN setup. This is still observed for the recently proposed Wasserstein GAN formulation, which stabilizes adversarial training and allows considering high-capacity network architectures such as ResNet. In this work we show how to boost conditional GANs by augmenting the available class labels. The new classes come from clustering in the representation space learned by the same GAN model. The proposed strategy is also feasible when no class information is available, i.e., in the unsupervised setup. Our generated samples reach state-of-the-art Inception scores for the CIFAR-10 and STL-10 datasets in both the supervised and unsupervised setups.
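The core idea of augmenting labels by clustering within a learned representation space can be sketched as follows. This is a minimal, hypothetical illustration assuming the features come from some learned representation (the paper uses the GAN's own model; here `split_classes` and its k-means loop are illustrative, not the authors' code):

```python
import numpy as np

def split_classes(features, labels, n_splits=2, n_iters=20, seed=0):
    """Augment class labels by clustering each class's samples in a
    learned representation space (hypothetical sketch of the
    class-splitting idea; in the paper the features would come from
    the GAN model itself).

    Returns new labels in [0, n_classes * n_splits).
    """
    rng = np.random.default_rng(seed)
    new_labels = np.empty_like(labels)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        x = features[idx]
        # Simple k-means on this class's feature vectors.
        centers = x[rng.choice(len(x), n_splits, replace=False)]
        for _ in range(n_iters):
            # Assign each sample to its nearest cluster center.
            assign = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            # Update each center to the mean of its assigned samples.
            for k in range(n_splits):
                if np.any(assign == k):
                    centers[k] = x[assign == k].mean(axis=0)
        # Each original class c is split into n_splits new classes.
        new_labels[idx] = c * n_splits + assign
    return new_labels
```

The augmented labels can then be fed back as conditioning information for further adversarial training, doubling (for `n_splits=2`) the number of classes the conditional GAN sees.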


Results from the Paper


Ranked #8 on Conditional Image Generation on CIFAR-10 (Inception score metric)

| Task                         | Dataset  | Model         | Metric          | Value | Global Rank |
|------------------------------|----------|---------------|-----------------|-------|-------------|
| Conditional Image Generation | CIFAR-10 | Splitting GAN | Inception score | 8.87  | #8          |
| Image Generation             | CIFAR-10 | Splitting GAN | Inception score | 7.90  | #57         |
