Wasserstein GAN + Gradient Penalty, or WGAN-GP, is a generative adversarial network that uses the Wasserstein loss formulation plus a gradient norm penalty on the critic to enforce Lipschitz continuity.
The original WGAN enforces a 1-Lipschitz critic with weight clipping, but this can lead to undesirable behaviour: pathological value surfaces, underuse of the critic's capacity, and exploding or vanishing gradients unless the clipping parameter $c$ is tuned carefully.
The gradient penalty is a soft version of the Lipschitz constraint, which follows from the fact that a differentiable function is 1-Lipschitz if and only if its gradient has norm at most 1 everywhere. The penalty is the squared difference between 1 and the norm of the critic's gradient, evaluated at points interpolated between real and generated samples.
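For illustration, here is a minimal PyTorch sketch of that penalty term. The function name `gradient_penalty` and the arguments `critic`, `real`, `fake`, and `lambda_gp` are placeholders for this example; the default coefficient of 10 follows the value used in the paper.

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Squared deviation of the critic's gradient norm from 1, averaged over
    points interpolated between real and generated samples."""
    batch_size = real.size(0)
    # One random interpolation coefficient per sample, broadcast over the
    # remaining dimensions (works for images or any other tensor shape).
    eps = torch.rand(batch_size, *([1] * (real.dim() - 1)), device=real.device)
    # Detach the fake batch so no gradients flow back into the generator here.
    interpolated = eps * real + (1.0 - eps) * fake.detach()
    interpolated.requires_grad_(True)

    scores = critic(interpolated)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,  # keep the graph so the penalty itself can be backpropagated
    )[0]

    # Per-sample gradient norm, penalized for deviating from 1.
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```

In a critic update, this term would simply be added to the Wasserstein critic loss, i.e. `critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake)`.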
Source: Improved Training of Wasserstein GANs
Task | Papers | Share |
---|---|---|
Image Generation | 3 | 15.00% |
Exposure Fairness | 1 | 5.00% |
Fairness | 1 | 5.00% |
Recommendation Systems | 1 | 5.00% |
Decision Making | 1 | 5.00% |
Disentanglement | 1 | 5.00% |
Image Super-Resolution | 1 | 5.00% |
Super-Resolution | 1 | 5.00% |
Music Generation | 1 | 5.00% |
Components (by type): Normalization, Convolutions, Normalization, Activation Functions, Loss Functions