InvertGAN: Reducing mode collapse with multi-dimensional Gaussian Inversion

1 Jan 2021 · Liangliang Shi, Yang Li, Junchi Yan

Generative adversarial networks have shown a strong ability to capture complex high-dimensional distributions and to generate realistic data samples, e.g. images. However, existing models still have difficulty handling multi-modal outputs and are often susceptible to mode collapse, in the sense that the generator maps latent variables to only a subset of the modes of the target distribution. In this paper, we analyze typical cases of mode collapse and define the concept of mode completeness from the viewpoint of probability measures. We further prove that an inverse mapping can effectively mitigate mode collapse. Under this framework, we adopt a multi-dimensional Gaussian loss in place of the one-dimensional loss widely used in existing work, in order to generate diverse images. Our experiments on synthetic data as well as real-world images show the superiority of our model. Source code will be released with the final paper.
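
The abstract does not spell out the training objective, so the sketch below is only one plausible reading of its two key ingredients: an inverse mapping from data back to latent space, and a multi-dimensional Gaussian loss on the inverted latents. Everything here is an assumption for illustration, not the paper's method: the networks G, E, and D, the latent dimensionality d, the weight lam, and the way the terms are combined are all hypothetical. The multi-dimensional Gaussian term is taken to be the negative log-likelihood of a vector v under the d-dimensional standard normal, which is 0.5*||v||^2 per sample up to an additive constant, rather than a per-coordinate (one-dimensional) penalty.

import torch
import torch.nn as nn

d = 16  # latent dimensionality (hypothetical choice)

# Hypothetical toy architectures on 2-D synthetic data; the paper does not specify any.
G = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, 2))  # generator: latent -> data
E = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, d))  # inverse mapping: data -> latent
D = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 1))  # discriminator (logits)

bce = nn.BCEWithLogitsLoss()
opt_ge = torch.optim.Adam(list(G.parameters()) + list(E.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

def gaussian_nll(v):
    # NLL of v under the d-dimensional standard normal N(0, I_d),
    # dropping the additive constant (d/2) * log(2*pi).
    return 0.5 * (v ** 2).sum(dim=1).mean()

def train_step(x_real, lam=1.0):
    b = x_real.size(0)
    z = torch.randn(b, d)

    # Discriminator update: real vs. generated samples.
    x_fake = G(z).detach()
    loss_d = bce(D(x_real), torch.ones(b, 1)) + bce(D(x_fake), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator + inverter update.
    x_fake = G(z)
    adv = bce(D(x_fake), torch.ones(b, 1))           # non-saturating GAN loss
    inv = ((E(x_fake) - z) ** 2).sum(dim=1).mean()   # latent reconstruction through the inverse mapping
    prior = gaussian_nll(E(x_real))                  # assumed multi-dimensional Gaussian loss on inverted real data
    loss_ge = adv + lam * (inv + prior)
    opt_ge.zero_grad(); loss_ge.backward(); opt_ge.step()
    return loss_d.item(), loss_ge.item()

# Hypothetical usage on a synthetic 2-D batch:
# x = torch.randn(64, 2); print(train_step(x))

Under this reading, the latent-reconstruction term pushes G toward being invertible, so distinct latent regions cannot collapse onto the same output mode, while the Gaussian term asks the inverted real data to fill the whole d-dimensional prior; together they are one way an inverse mapping can discourage mode dropping, which is the intuition the abstract describes.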
