Generalized Loss-Sensitive Adversarial Learning with Manifold Margins

ECCV 2018 · Marzieh Edraki, Guo-Jun Qi

The classic Generative Adversarial Net and its variants can be roughly categorized into two large families: unregularized versus regularized GANs. By relaxing the non-parametric assumption on the discriminator in the classic GAN, regularized GANs have better generalization ability to produce new samples drawn from the real distribution. It is well known that real data such as natural images are not uniformly distributed over the whole data space; instead, they are often restricted to a low-dimensional manifold of the ambient space. This manifold assumption suggests that a distance over the manifold should better characterize the distinction between real and fake samples. Thus, we define a pullback operator to map samples back to their data manifold, and a manifold margin is defined as the distance between the pullback representations to distinguish between real and fake samples and to learn the optimal generators. We justify the effectiveness of the proposed model both theoretically and empirically.

No code implementations yet.
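Since no code is released, the following is a minimal PyTorch sketch of how the objective described in the abstract could look, assuming a loss-sensitive (LS-GAN style) hinge formulation: an encoder plays the role of the pullback operator, and the margin in the hinge is the distance between pullback representations. All module names, network sizes, and hyperparameters here are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: loss-sensitive adversarial objective with a manifold
# margin computed between pullback (encoder) representations. All names and
# dimensions are illustrative; the paper has no released code.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

z_dim, x_dim, m_dim = 64, 784, 32                  # latent / ambient / manifold dims
G = nn.Sequential(mlp(z_dim, x_dim), nn.Tanh())    # generator: z -> data space
E = mlp(x_dim, m_dim)                              # pullback operator: x -> manifold coords
L = mlp(x_dim, 1)                                  # loss function L(x) of the LS-GAN

def manifold_margin(x_real, x_fake):
    # Margin measured between pullback representations rather than
    # in the raw ambient space.
    return (E(x_real) - E(x_fake)).norm(dim=1)

def critic_step(x_real, lam=1.0):
    # Learn L (and E): each real sample should incur a loss lower than
    # its paired fake by at least the manifold margin between the two.
    z = torch.randn(x_real.size(0), z_dim)
    x_fake = G(z).detach()
    margin = manifold_margin(x_real, x_fake)
    hinge = torch.relu(margin + L(x_real).squeeze(1) - L(x_fake).squeeze(1))
    return L(x_real).mean() + lam * hinge.mean()

def generator_step(batch_size):
    # Learn G: push generated samples toward low loss under L.
    z = torch.randn(batch_size, z_dim)
    return L(G(z)).mean()

# One toy alternating update on random data, just to show the pieces compose.
opt_LE = torch.optim.Adam(list(L.parameters()) + list(E.parameters()), lr=1e-4)
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)

x = torch.rand(16, x_dim) * 2 - 1
opt_LE.zero_grad(); critic_step(x).backward(); opt_LE.step()
opt_G.zero_grad(); generator_step(16).backward(); opt_G.step()
```

In a real training loop one would alternate these two steps over minibatches; the key point the sketch illustrates is that the hinge margin is data-pair dependent and computed in the pullback space rather than being a fixed constant.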
