Sub-GAN: An Unsupervised Generative Model via Subspaces

Recent years have witnessed significant progress in building robust generative models that capture informative distributions of natural data. However, the distribution of complex data such as images and videos is difficult to exploit fully because of the high dimensionality of the ambient space. Consequently, how to effectively guide the training of generative models is a crucial issue. In this paper, we present a subspace-based generative adversarial network (Sub-GAN) that simultaneously disentangles multiple latent subspaces and generates diverse samples from each of them. Since high-dimensional natural data usually lies in a union of low-dimensional subspaces that carry rich semantic structure, Sub-GAN incorporates a novel clusterer that interacts with the generator and discriminator through subspace information. Unlike traditional generative models, the proposed Sub-GAN can control the diversity of the generated samples via the number of learned subspaces. Moreover, Sub-GAN is trained in an unsupervised fashion and discovers not only the visual classes but also latent continuous attributes. We demonstrate that our model can discover meaningful visual attributes that are hard to annotate with strong supervision, e.g., the writing style of digits, and thereby avoids the mode collapse problem. Extensive experiments show the competitive performance of the proposed method both in generating diverse images of satisfactory quality and in discovering discriminative latent subspaces.
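
No official implementation is available, so the following is only a minimal PyTorch sketch of how the three components described in the abstract might fit together. Every name, layer size, and loss term here is our assumption rather than the paper's method: we condition the generator on a one-hot subspace code and train the clusterer to recover that code from the generated sample, which is one simple way to couple generation with subspace membership.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes for an MNIST-scale setup; the paper does not specify them.
NOISE_DIM, NUM_SUBSPACES, IMG_DIM = 64, 10, 784

class Generator(nn.Module):
    """Maps a noise vector plus a one-hot subspace code to a sample."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + NUM_SUBSPACES, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),
        )

    def forward(self, z, code):
        return self.net(torch.cat([z, code], dim=1))

class Discriminator(nn.Module):
    """Standard real/fake critic on flattened images (outputs a logit)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x)

class Clusterer(nn.Module):
    """Predicts logits over the latent subspaces; softmax gives soft memberships."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, NUM_SUBSPACES),
        )

    def forward(self, x):
        return self.net(x)

# One illustrative generator update: draw a subspace index, generate from it,
# and ask the clusterer to recover the index. Tying each subspace to a
# distinct mode of the data is what discourages mode collapse.
G, D, C = Generator(), Discriminator(), Clusterer()
z = torch.randn(32, NOISE_DIM)
idx = torch.randint(NUM_SUBSPACES, (32,))
code = F.one_hot(idx, NUM_SUBSPACES).float()
fake = G(z, code)
adv_loss = F.binary_cross_entropy_with_logits(
    D(fake), torch.ones(32, 1))               # fool the discriminator
cluster_loss = F.cross_entropy(C(fake), idx)  # make the subspace code recoverable
(adv_loss + cluster_loss).backward()
```

Recovering the code from the generated output is the same coupling trick used by InfoGAN-style models; the actual Sub-GAN clusterer presumably operates on learned subspace representations rather than raw pixels, but the abstract does not give that level of detail.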
