R1 Regularization

Introduced by Mescheder et al. in Which Training Methods for GANs do actually Converge?

$R_{1}$ regularization is a gradient penalty for training generative adversarial networks. It discourages the discriminator from deviating from the Nash equilibrium by penalizing its gradient on real data alone: when the generator distribution matches the true data distribution and the discriminator equals 0 on the data manifold, the penalty ensures that the discriminator cannot create a non-zero gradient orthogonal to the data manifold without incurring a loss in the GAN game.

This leads to the following regularization term:

$$ R_{1}\left(\psi\right) = \frac{\gamma}{2}E_{p_{D}\left(x\right)}\left[||\nabla{D_{\psi}\left(x\right)}||^{2}\right] $$

Source: Which Training Methods for GANs do actually Converge?
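As an illustration, here is a minimal PyTorch sketch of the penalty above. The names `r1_penalty`, `discriminator`, and `real_images`, as well as the default `gamma=10.0`, are assumptions for this example rather than details taken from the paper.

```python
import torch

def r1_penalty(discriminator, real_images, gamma=10.0):
    """R1 penalty: (gamma / 2) * E_{x ~ p_D} [ ||grad_x D(x)||^2 ], computed on real data only."""
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images)
    # Gradient of the discriminator output w.r.t. the real inputs.
    (grads,) = torch.autograd.grad(
        outputs=scores.sum(), inputs=real_images, create_graph=True
    )
    # Squared L2 norm per sample, averaged over the batch.
    grad_sq = grads.pow(2).flatten(start_dim=1).sum(dim=1).mean()
    return 0.5 * gamma * grad_sq
```

In a typical training loop, this term would be added to the discriminator loss during the discriminator update, using only the real batch; `create_graph=True` is needed so the penalty itself can be backpropagated.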


Tasks


| Task | Papers | Share |
| --- | --- | --- |
| Image Generation | 122 | 15.54% |
| Disentanglement | 47 | 5.99% |
| Face Generation | 33 | 4.20% |
| Image Manipulation | 33 | 4.20% |
| Face Recognition | 25 | 3.18% |
| Diversity | 24 | 3.06% |
| Image-to-Image Translation | 19 | 2.42% |
| Face Swapping | 18 | 2.29% |
| Decoder | 18 | 2.29% |

