Taking Control of Intra-class Variation in Conditional GANs Under Weak Supervision

27 Nov 2018 · Richard T. Marriott, Sami Romdhani, Liming Chen

Generative Adversarial Networks (GANs) are able to learn mappings between simple, relatively low-dimensional random distributions and points on the manifold of realistic images in image-space. The semantics of this mapping, however, are typically entangled, such that meaningful image properties cannot be controlled independently of one another. Conditional GANs (cGANs) offer a potential solution to this problem, allowing specific semantics to be enforced during training. This solution, however, depends on the availability of precise labels, which are sometimes difficult or near-impossible to obtain, e.g. labels representing lighting conditions or describing the background. In this paper we introduce a new formulation of the cGAN that is able to learn disentangled, multivariate models of semantically meaningful variation and that requires only the weak supervision of binary attribute labels. For example, given only labels of ambient/non-ambient lighting, our method is able to learn multivariate lighting models disentangled from other factors such as identity and pose. We coin the method intra-class variation isolation (IVI) and the resulting network the IVI-GAN. We evaluate IVI-GAN on the CelebA dataset and on synthetic 3D morphable model data, learning to disentangle attributes such as lighting, pose, expression, and even the background.
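
As a rough sketch of the conditioning idea described in the abstract (not the authors' implementation): each binary attribute label can be thought of as gating a small continuous parameter vector that is passed to the generator alongside the usual noise input; when the attribute is labelled absent the vector is zeroed, and when it is present the vector is sampled, so the generator is free to learn a multivariate model of that attribute's variation. The PyTorch module below illustrates this assumption; all names, dimensions and the simple MLP architecture are illustrative choices, not taken from the paper.

```python
# Illustrative sketch only: binary attribute labels gate per-attribute
# continuous codes fed to a cGAN generator. Architecture and sizes are
# hypothetical assumptions, not the IVI-GAN of Marriott et al.
import torch
import torch.nn as nn

class GatedAttributeGenerator(nn.Module):
    def __init__(self, z_dim=128, n_attrs=4, attr_dim=8, img_dim=64 * 64 * 3):
        super().__init__()
        self.n_attrs, self.attr_dim = n_attrs, attr_dim
        cond_dim = n_attrs * attr_dim  # concatenated gated attribute codes
        self.net = nn.Sequential(
            nn.Linear(z_dim + cond_dim, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, img_dim), nn.Tanh(),
        )

    def forward(self, z, labels):
        # labels: (batch, n_attrs) with entries in {0, 1}.
        # Sample a continuous code per attribute, then zero it wherever the
        # attribute is labelled absent, isolating that attribute's variation.
        codes = torch.randn(z.size(0), self.n_attrs, self.attr_dim, device=z.device)
        codes = codes * labels.unsqueeze(-1)
        return self.net(torch.cat([z, codes.flatten(1)], dim=1))

# Usage: vary only the (hypothetical) first attribute, e.g. lighting.
if __name__ == "__main__":
    G = GatedAttributeGenerator()
    z = torch.randn(16, 128)
    labels = torch.tensor([[1.0, 0.0, 0.0, 0.0]]).repeat(16, 1)
    fake = G(z, labels)
    print(fake.shape)  # torch.Size([16, 12288])
```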
