Wasserstein Gradient Penalty Loss, or WGAN-GP Loss, is a loss function for generative adversarial networks that augments the Wasserstein loss with a gradient norm penalty on random samples $\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{\mathbf{x}}}$ to enforce Lipschitz continuity:
$$ L = \mathbb{E}_{\tilde{\mathbf{x}} \sim \mathbb{P}_{g}}\left[D\left(\tilde{\mathbf{x}}\right)\right] - \mathbb{E}_{\mathbf{x} \sim \mathbb{P}_{r}}\left[D\left(\mathbf{x}\right)\right] + \lambda\,\mathbb{E}_{\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{\mathbf{x}}}}\left[\left(\left\lVert\nabla_{\hat{\mathbf{x}}}D\left(\hat{\mathbf{x}}\right)\right\rVert_{2} - 1\right)^{2}\right]$$
It was introduced as part of the WGAN-GP model.
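The gradient penalty term can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a hypothetical linear critic $D(\mathbf{x}) = \mathbf{w}^\top\mathbf{x}$ so the input gradient is available in closed form, whereas in practice $D$ is a neural network and the gradient comes from automatic differentiation. The sampling of $\hat{\mathbf{x}}$ uniformly along lines between real and generated points follows the definition above.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
# Hypothetical linear critic D(x) = w @ x, chosen only so that
# grad_x D(x) = w is known analytically for this sketch.
w = rng.normal(size=dim)

def critic(x):
    return x @ w  # D(x)

def gradient_penalty(x_real, x_fake, lam=10.0):
    # Sample x_hat uniformly along straight lines between pairs of
    # real and generated points, as in the WGAN-GP formulation.
    eps = rng.uniform(size=(x_real.shape[0], 1))
    x_hat = eps * x_real + (1.0 - eps) * x_fake
    # For the linear critic, grad_x D(x_hat) = w for every sample;
    # with a neural critic this would come from autodiff instead.
    grads = np.broadcast_to(w, x_hat.shape)
    norms = np.linalg.norm(grads, axis=1)
    # Penalize the squared deviation of the gradient norm from 1.
    return lam * np.mean((norms - 1.0) ** 2)

x_real = rng.normal(size=(4, dim))
x_fake = rng.normal(size=(4, dim))
penalty = gradient_penalty(x_real, x_fake)
```

The default coefficient `lam=10.0` matches the value of $\lambda$ recommended in the source paper; because the critic here is linear, the penalty reduces exactly to $\lambda(\lVert\mathbf{w}\rVert_2 - 1)^2$.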
Source: Improved Training of Wasserstein GANs

| Task | Papers | Share |
| --- | --- | --- |
| Image Generation | 12 | 16.22% |
| Speech Synthesis | 4 | 5.41% |
| Translation | 3 | 4.05% |
| Voice Conversion | 3 | 4.05% |
| Synthetic Data Generation | 3 | 4.05% |
| Disentanglement | 3 | 4.05% |
| Audio Generation | 2 | 2.70% |
| Image-to-Image Translation | 2 | 2.70% |
| Singing Voice Synthesis | 2 | 2.70% |