Generative Adversarial Networks

Wasserstein GAN (Gradient Penalty)

Introduced by Gulrajani et al. in Improved Training of Wasserstein GANs

Wasserstein GAN + Gradient Penalty, or WGAN-GP, is a generative adversarial network that uses the Wasserstein loss formulation plus a gradient-norm penalty on the critic to enforce Lipschitz continuity.
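
For reference, the critic loss given in the paper adds the penalty term, weighted by a coefficient $\lambda$ (set to 10 in the paper), to the usual Wasserstein objective, where $\hat{x}$ denotes points sampled uniformly along straight lines between pairs of real and generated samples:

$$ L = \mathbb{E}_{\tilde{x} \sim \mathbb{P}_g}\left[D(\tilde{x})\right] - \mathbb{E}_{x \sim \mathbb{P}_r}\left[D(x)\right] + \lambda\, \mathbb{E}_{\hat{x} \sim \mathbb{P}_{\hat{x}}}\left[\left(\left\lVert \nabla_{\hat{x}} D(\hat{x}) \right\rVert_2 - 1\right)^2\right] $$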

The original WGAN uses weight clipping to enforce a 1-Lipschitz critic, but this can lead to undesirable behaviour: pathological value surfaces, under-use of the critic's capacity, and exploding or vanishing gradients unless the clipping threshold $c$ is tuned carefully.
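
For contrast, weight clipping amounts to clamping every critic parameter to $[-c, c]$ after each update. A minimal PyTorch-style sketch (the critic architecture here is hypothetical, chosen only for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical critic, for illustration only.
critic = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
c = 0.01  # clipping threshold; 0.01 is the default used in the original WGAN paper

# Original WGAN constraint (what WGAN-GP replaces): after each critic update,
# clamp every parameter of the critic to the interval [-c, c].
with torch.no_grad():
    for p in critic.parameters():
        p.clamp_(-c, c)
```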

A gradient penalty is a soft version of the Lipschitz constraint, which follows from the fact that a differentiable function is 1-Lipschitz if and only if its gradient has norm at most 1 everywhere. WGAN-GP penalizes the squared difference of the critic's gradient norm from 1, evaluated at points interpolated between real and generated samples.
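
A minimal PyTorch-style sketch of that penalty term, assuming a `critic` module and batches of real and generated samples (the function name and arguments are illustrative, not from the source):

```python
import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    """Mean squared deviation of the critic's gradient norm from 1,
    measured at random interpolates between real and fake samples."""
    batch_size = real.size(0)
    # One interpolation coefficient per example, broadcast over the remaining dims.
    eps = torch.rand(batch_size, *([1] * (real.dim() - 1)), device=device)
    interpolates = (eps * real.detach() + (1 - eps) * fake.detach()).requires_grad_(True)

    scores = critic(interpolates)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interpolates,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,  # keep the graph so the penalty itself can be backpropagated
    )[0]

    grad_norm = grads.reshape(batch_size, -1).norm(2, dim=1)
    return ((grad_norm - 1) ** 2).mean()
```

In a training step, the critic loss would then be something like `fake_scores.mean() - real_scores.mean() + lambda_gp * gradient_penalty(critic, real, fake)`, with `lambda_gp = 10` as in the paper.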

Source: Improved Training of Wasserstein GANs

Tasks

Task                     Papers   Share
Image Generation         3        13.04%
Diversity                2        8.70%
Exposure Fairness        1        4.35%
Fairness                 1        4.35%
Recommendation Systems   1        4.35%
Decision Making          1        4.35%
Disentanglement          1        4.35%
Image Super-Resolution   1        4.35%
Super-Resolution         1        4.35%