Ensembles are a popular way to improve results of discriminative CNNs. The
combination of several networks trained starting from different initializations
improves results significantly.
In this paper we investigate the use of
ensembles of GANs. The specific nature of GANs opens up several new ways to
construct ensembles. The first exploits the fact that, in the minimax game played to
optimize the GAN objective, the generator network keeps changing even after it can be
considered optimal. Ensembles of GANs can therefore be constructed from a single
training run, using the same initialization, by simply taking the models obtained after
different numbers of iterations. These so-called self-ensembles are much faster to
train than traditional ensembles.
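The self-ensemble construction can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example (the paper provides no code): it assumes generator checkpoints saved at different iterations of one training run and simply pools samples drawn from each snapshot; the Generator class and the checkpoint file names are placeholders.

```python
# Sketch: sampling from a GAN self-ensemble built from snapshots of a
# single training run saved at different iterations (illustrative only).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy generator standing in for whatever architecture was trained."""
    def __init__(self, z_dim=100, img_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 512), nn.ReLU(),
            nn.Linear(512, img_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def sample_self_ensemble(checkpoint_paths, n_per_model=16, z_dim=100):
    """Pool samples from generator snapshots taken at different iterations."""
    samples = []
    for path in checkpoint_paths:
        g = Generator(z_dim)
        g.load_state_dict(torch.load(path))  # snapshot from the same run
        g.eval()
        with torch.no_grad():
            z = torch.randn(n_per_model, z_dim)
            samples.append(g(z))
    return torch.cat(samples, dim=0)  # ensemble sample set

# e.g. snapshots saved every 10k iterations of the same run (hypothetical names):
# imgs = sample_self_ensemble(["gen_10k.pt", "gen_20k.pt", "gen_30k.pt"])
```

Because all snapshots come from the same run, the only overhead compared to a single GAN is storing and evaluating the extra checkpoints.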
The second method, called cascade GANs, redirects the part of the training data that is
poorly modeled by the first GAN to a second GAN (see the sketch below). In experiments
on the CIFAR-10 dataset we show that ensembles of GANs yield model probability
distributions that approximate the data distribution more closely. In addition, we show
that these improved
results can be obtained at little additional computational cost.
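The cascade construction can be sketched in a similar way. The hypothetical fragment below trains a first GAN on all data, selects the training images the first GAN models poorly, and trains a second GAN on that subset; the selection rule shown (thresholding the discriminator's output on real images) and the train_gan helper are assumptions for illustration, not necessarily the paper's exact procedure.

```python
# Sketch: cascade of GANs. Real images the first GAN covers poorly are
# redirected to a second GAN; at test time samples are pooled from both.
import torch

def select_poorly_modeled(discriminator, real_images, threshold=0.9):
    """Keep real images the discriminator still confidently calls real,
    i.e. images the first generator has not learned to cover (assumed rule)."""
    with torch.no_grad():
        scores = torch.sigmoid(discriminator(real_images)).squeeze(-1)
    return real_images[scores > threshold]

def train_cascade(train_gan, real_images, threshold=0.9):
    """Train GAN1 on all data, then GAN2 on the poorly modeled subset.
    `train_gan` is a hypothetical helper returning (generator, discriminator)."""
    g1, d1 = train_gan(real_images)                       # first GAN, full data
    hard = select_poorly_modeled(d1, real_images, threshold)
    g2, _ = train_gan(hard)                               # second GAN, residual data
    return [g1, g2]

def sample_cascade(generators, n_samples, z_dim=100):
    """Draw an equal share of samples from each member of the cascade."""
    per_model = n_samples // len(generators)
    with torch.no_grad():
        return torch.cat(
            [g(torch.randn(per_model, z_dim)) for g in generators], dim=0
        )
```

At test time the cascade is used like the self-ensemble: samples are pooled from all members rather than from a single generator.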