Weight Normalization

Weight Normalization is a normalization method for training neural networks. It is inspired by batch normalization, but it is a deterministic method that does not share batch normalization's property of adding noise to the gradients. It reparameterizes each $k$-dimensional weight vector $\textbf{w}$ in terms of a parameter vector $\textbf{v}$ and a scalar parameter $g$, and performs stochastic gradient descent with respect to those new parameters instead. Weight vectors are expressed in terms of the new parameters using:

$$ \textbf{w} = \frac{g}{\Vert\textbf{v}\Vert}\textbf{v}$$

where $\textbf{v}$ is a $k$-dimensional vector, $g$ is a scalar, and $\Vert\textbf{v}\Vert$ denotes the Euclidean norm of $\textbf{v}$. This reparameterization has the effect of fixing the Euclidean norm of the weight vector $\textbf{w}$: we now have $\Vert\textbf{w}\Vert = g$, independent of the parameters $\textbf{v}$.
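
To make the reparameterization concrete, the following is a minimal NumPy sketch (an illustrative assumption, not code from the paper); the helper name `weight_norm` and the variables `v` and `g` simply mirror the notation above.

```python
import numpy as np

# Minimal sketch of the reparameterization described above (illustrative only,
# not the paper's reference implementation). A weight vector w is stored as a
# direction vector v and a scalar scale g, and reconstructed as
# w = (g / ||v||) * v, so the norm of w is fixed by g regardless of v.

def weight_norm(v: np.ndarray, g: float) -> np.ndarray:
    """Return w = (g / ||v||) * v for a k-dimensional parameter vector v."""
    return (g / np.linalg.norm(v)) * v

rng = np.random.default_rng(0)
v = rng.normal(size=5)    # k-dimensional parameter vector (here k = 5)
g = 3.0                   # scalar scale parameter

w = weight_norm(v, g)
print(np.linalg.norm(w))  # ~3.0: the Euclidean norm of w equals g, independent of v
```

Gradient descent is then run on $\textbf{v}$ and $g$ rather than on $\textbf{w}$ directly. In practice, deep learning frameworks typically expose this as a per-layer wrapper (for example, `torch.nn.utils.weight_norm` in PyTorch), so the decomposition rarely needs to be written by hand.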

Source: Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks

Tasks


Task Papers Share
Speech Synthesis 12 10.81%
Image Classification 7 6.31%
Image Generation 6 5.41%
Quantization 6 5.41%
Decoder 4 3.60%
General Classification 4 3.60%
Model Compression 3 2.70%
BIG-bench Machine Learning 3 2.70%
Translation 3 2.70%

Categories

Normalization