Weight Normalization is a normalization method for training neural networks. It is inspired by batch normalization, but it is a deterministic method that does not share batch normalization's property of adding noise to the gradients. It reparameterizes each weight vector $\textbf{w}$ in terms of a parameter vector $\textbf{v}$ and a scalar parameter $g$, and performs stochastic gradient descent with respect to those parameters instead. Weight vectors are expressed in terms of the new parameters as:
$$ \textbf{w} = \frac{g}{\Vert\textbf{v}\Vert}\textbf{v}$$
where $\textbf{v}$ is a $k$-dimensional vector, $g$ is a scalar, and $\Vert\textbf{v}\Vert$ denotes the Euclidean norm of $\textbf{v}$. This reparameterization has the effect of fixing the Euclidean norm of the weight vector $\textbf{w}$: we now have $\Vert\textbf{w}\Vert = g$, independent of the parameters $\textbf{v}$.
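For concreteness, here is a minimal sketch of this reparameterization in Python with NumPy; the names `v`, `g`, and `weight_norm` are illustrative choices mirroring the symbols above (frameworks such as PyTorch ship their own implementation, e.g. `torch.nn.utils.weight_norm`).

```python
import numpy as np

def weight_norm(v: np.ndarray, g: float) -> np.ndarray:
    """Reparameterize a weight vector as w = (g / ||v||) * v."""
    return (g / np.linalg.norm(v)) * v

v = np.array([3.0, 4.0])  # k-dimensional parameter vector (||v|| = 5)
g = 2.0                   # scalar scale parameter

w = weight_norm(v, g)
print(w)                  # [1.2 1.6]
print(np.linalg.norm(w))  # 2.0 == g, independent of v
```

In training, gradients are taken with respect to `v` and `g` rather than `w`, which decouples the direction of the weight vector from its length.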
Source: Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks

| Task | Papers | Share |
| --- | --- | --- |
| Speech Synthesis | 12 | 15.19% |
| Image Classification | 6 | 7.59% |
| Image Generation | 5 | 6.33% |
| General Classification | 4 | 5.06% |
| Model Compression | 3 | 3.80% |
| BIG-bench Machine Learning | 3 | 3.80% |
| Quantization | 3 | 3.80% |
| Machine Translation | 3 | 3.80% |
| Fairness | 2 | 2.53% |