StyleGAN is a generative adversarial network that uses an alternative generator architecture, borrowing from the style transfer literature; in particular, it uses adaptive instance normalization (AdaIN). Otherwise it follows Progressive GAN in using a progressively growing training regime. Another departure from regular GANs is that the generator starts from a learned constant tensor rather than from a stochastically sampled latent variable. Instead, the sampled latent variables are transformed by an 8-layer feedforward mapping network into style vectors, which drive the adaptive instance normalization at each resolution. Lastly, it employs a form of regularization called mixing regularization, which mixes the styles of two latent variables during training.
Source: A Style-Based Generator Architecture for Generative Adversarial Networks
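The pieces above fit together roughly as in the following minimal sketch, assuming a PyTorch implementation; the class names (`MappingNetwork`, `AdaIN`, `SynthesisStart`) and all layer sizes are illustrative placeholders, not the official StyleGAN code.

```python
import torch
import torch.nn as nn


class MappingNetwork(nn.Module):
    """8-layer feedforward network mapping a latent z to a style vector w."""
    def __init__(self, latent_dim=512, num_layers=8):
        super().__init__()
        layers = []
        for _ in range(num_layers):
            layers += [nn.Linear(latent_dim, latent_dim), nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)


class AdaIN(nn.Module):
    """Adaptive instance normalization: normalize each feature map, then
    scale and shift it with parameters predicted from the style vector w."""
    def __init__(self, channels, latent_dim=512):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)
        self.style = nn.Linear(latent_dim, channels * 2)  # predicts (scale, bias)

    def forward(self, x, w):
        scale, bias = self.style(w).chunk(2, dim=1)
        scale = scale[:, :, None, None]
        bias = bias[:, :, None, None]
        return (1 + scale) * self.norm(x) + bias


class SynthesisStart(nn.Module):
    """Generation starts from a learned constant tensor rather than a latent
    sample; the style w modulates the features through AdaIN."""
    def __init__(self, channels=512, latent_dim=512):
        super().__init__()
        self.const = nn.Parameter(torch.ones(1, channels, 4, 4))
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.adain = AdaIN(channels, latent_dim)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, w):
        x = self.const.expand(w.size(0), -1, -1, -1)
        x = self.act(self.conv(x))
        return self.adain(x, w)


# Usage: sample two latents and mix their styles. Mixing regularization
# switches from w1 to w2 at a randomly chosen resolution during training.
mapping = MappingNetwork()
block = SynthesisStart()
z1, z2 = torch.randn(4, 512), torch.randn(4, 512)
w1, w2 = mapping(z1), mapping(z2)
features = block(w1)  # early-resolution layers use w1
# later-resolution blocks would be fed w2 instead of w1 when mixing
```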
| Task | Papers | Share |
|---|---|---|
| Image Generation | 65 | 15.70% |
| Disentanglement | 28 | 6.76% |
| Face Generation | 17 | 4.11% |
| Image Manipulation | 16 | 3.86% |
| Face Recognition | 14 | 3.38% |
| Face Swapping | 13 | 3.14% |
| Decoder | 12 | 2.90% |
| Diversity | 10 | 2.42% |
| Super-Resolution | 10 | 2.42% |
| Component | Type |
|---|---|
| Adaptive Instance Normalization | Normalization |
| Convolution | Convolutions |
| Feedforward Network | Feedforward Networks |
| Leaky ReLU | Activation Functions |
| R1 Regularization | Regularization |
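R1 regularization, listed among the components above, is a gradient penalty applied to the discriminator on real samples only. A minimal sketch, assuming a PyTorch discriminator; the weighting factor `gamma` is an illustrative default rather than a value taken from the paper.

```python
import torch

def r1_penalty(discriminator, real_images, gamma=10.0):
    """R1 regularization: penalize the squared gradient norm of the
    discriminator's output with respect to real images."""
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images)
    grads, = torch.autograd.grad(outputs=scores.sum(), inputs=real_images,
                                 create_graph=True)
    penalty = grads.pow(2).reshape(grads.size(0), -1).sum(dim=1).mean()
    return 0.5 * gamma * penalty
```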