Activation Normalization (ActNorm) is a normalization layer used in flow-based generative models; it was introduced in the Glow architecture. An ActNorm layer performs an affine transformation of the activations using a scale and bias parameter per channel, similar to batch normalization. These parameters are initialized such that the post-actnorm activations per channel have zero mean and unit variance given an initial minibatch of data; this is a form of data-dependent initialization. After initialization, the scale and bias are treated as regular trainable parameters that are independent of the data.
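A minimal PyTorch-style sketch of the layer is shown below, assuming 4D `(N, C, H, W)` activations. The class and parameter names and the small epsilon for numerical stability are illustrative rather than taken from the official Glow code; the forward pass also returns the per-sample log-determinant term that a flow-based model needs for its likelihood.

```python
import torch
import torch.nn as nn


class ActNorm(nn.Module):
    """Per-channel affine transform with data-dependent initialization."""

    def __init__(self, num_channels):
        super().__init__()
        # One log-scale and bias per channel, broadcast over the spatial dims.
        self.log_scale = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        self.initialized = False

    @torch.no_grad()
    def _init_from_data(self, x):
        # Use the first minibatch so that post-actnorm activations have
        # zero mean and unit variance per channel.
        mean = x.mean(dim=(0, 2, 3), keepdim=True)
        std = x.std(dim=(0, 2, 3), keepdim=True)
        self.bias.copy_(-mean / (std + 1e-6))
        self.log_scale.copy_(-torch.log(std + 1e-6))
        self.initialized = True

    def forward(self, x):
        if not self.initialized:
            self._init_from_data(x)
        # Affine transform: y = exp(log_scale) * x + bias, applied per channel.
        y = x * torch.exp(self.log_scale) + self.bias
        # Log-determinant of the transform: sum of per-channel log|scale|,
        # repeated over every spatial position.
        logdet = self.log_scale.sum() * x.shape[2] * x.shape[3]
        return y, logdet
```

After the first forward pass the scale and bias are updated by gradient descent like any other parameters; only their initial values depend on the data.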
Source: Glow: Generative Flow with Invertible 1x1 Convolutions
| Task | Papers | Share |
|---|---|---|
| Flare Removal | 3 | 7.50% |
| Image Enhancement | 3 | 7.50% |
| Federated Learning | 2 | 5.00% |
| Denoising | 2 | 5.00% |
| Low-Light Image Enhancement | 2 | 5.00% |
| Zero-Shot Learning | 2 | 5.00% |
| Image Dehazing | 2 | 5.00% |
| Text to Speech | 2 | 5.00% |
| Intrusion Detection | 1 | 2.50% |