Weight Demodulation

Introduced by Karras et al. in Analyzing and Improving the Image Quality of StyleGAN

Weight Demodulation is an alternative to adaptive instance normalization for use in generative adversarial networks; it was introduced in StyleGAN2. The purpose of instance normalization is to remove the effect of $s$, the scales of the feature maps, from the statistics of the convolution's output feature maps. Weight demodulation achieves this goal more directly. Modulation scales each input feature map of the convolution based on the incoming style, which can be baked into the weights as $w'_{ijk} = s_{i} \cdot w_{ijk}$, where $w$ and $w'$ are the original and modulated weights, $s_{i}$ is the scale corresponding to the $i$-th input feature map, and $j$ and $k$ enumerate the output feature maps and the spatial footprint of the convolution. Assuming that the input activations are i.i.d. random variables with unit standard deviation, each output activation is a weighted sum of such variables, so after modulation and convolution the output activations have standard deviation:

$$ \sigma_{j} = \sqrt{\sum_{i,k} {w'_{ijk}}^{2}} $$

i.e., the outputs are scaled by the $L_{2}$ norm of the corresponding weights. The subsequent normalization aims to restore the outputs back to unit standard deviation. This can be achieved if we scale (“demodulate”) each output feature map $j$ by $1/\sigma_{j}$ . Alternatively, we can again bake this into the convolution weights:

$$ w''_{ijk} = w'_{ijk} / \sqrt{\sum_{i,k} {w'_{ijk}}^{2} + \epsilon} $$

where $\epsilon$ is a small constant to avoid numerical issues.
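To make both baking steps concrete, here is a minimal PyTorch sketch of a modulated convolution with weight demodulation. The function name `modulated_conv2d` and its signature are ours for illustration, not from the paper; the official StyleGAN2 implementation uses the same grouped-convolution trick for per-sample weights but adds details (up/downsampling, noise inputs, fused bias) omitted here.

```python
import torch
import torch.nn.functional as F

def modulated_conv2d(x, weight, s, eps=1e-8, demodulate=True):
    """Convolution with style modulation and weight demodulation (sketch).

    x      : input activations, shape (batch, in_ch, H, W)
    weight : shared convolution weights, shape (out_ch, in_ch, kh, kw)
    s      : per-sample scales for each input feature map, shape (batch, in_ch)
    """
    batch, in_ch, h, w = x.shape
    out_ch, _, kh, kw = weight.shape

    # Modulation baked into the weights: w'_ijk = s_i * w_ijk.
    # Broadcasting yields a separate weight tensor per sample in the batch.
    w1 = weight.unsqueeze(0) * s.view(batch, 1, in_ch, 1, 1)

    if demodulate:
        # Demodulation: w''_ijk = w'_ijk / sqrt(sum_{i,k} w'_ijk^2 + eps),
        # restoring each output feature map j to unit expected std.
        w1 = w1 * torch.rsqrt(w1.pow(2).sum(dim=(2, 3, 4), keepdim=True) + eps)

    # Apply the per-sample weights as a grouped convolution by folding
    # the batch dimension into the channel dimension.
    x = x.reshape(1, batch * in_ch, h, w)
    w1 = w1.reshape(batch * out_ch, in_ch, kh, kw)
    out = F.conv2d(x, w1, padding=kh // 2, groups=batch)
    return out.reshape(batch, out_ch, h, w)
```

With i.i.d. unit-variance inputs, the demodulated outputs should again have roughly unit standard deviation regardless of the scales $s$:

```python
x = torch.randn(4, 64, 32, 32)
weight = torch.randn(128, 64, 3, 3)
s = torch.rand(4, 64) + 0.5           # arbitrary positive style scales
y = modulated_conv2d(x, weight, s)
print(y.shape, y.std().item())        # torch.Size([4, 128, 32, 32]), ~1.0
```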

Source: Analyzing and Improving the Image Quality of StyleGAN


Tasks


Task                           Papers   Share
Image Generation                   49   19.52%
Disentanglement                    13    5.18%
Image Manipulation                 13    5.18%
Face Generation                    11    4.38%
Translation                         7    2.79%
Face Recognition                    7    2.79%
Conditional Image Generation        7    2.79%
Face Swapping                       6    2.39%
Domain Adaptation                   6    2.39%

Components


Component     Type
Convolution   Convolutions

Categories

Normalization