Normalization

Weight Standardization (WS) is a normalization technique that smooths the loss landscape by standardizing the weights in convolutional layers. Unlike previous normalization methods, which operate on activations, WS targets the smoothing effect that normalizing the weights has on the loss landscape, rather than just length-direction decoupling. Theoretically, WS reduces the Lipschitz constants of the loss and of its gradients; hence it smooths the loss landscape and improves training.

In Weight Standardization, instead of directly optimizing the loss $\mathcal{L}$ on the original weights $\hat{W}$, we reparameterize the weights $\hat{W}$ as a function of $W$, i.e. $\hat{W}=\text{WS}(W)$, and optimize the loss $\mathcal{L}$ on $W$ by SGD:

$$ \hat{W} = \Big[ \hat{W}_{i,j}~\big|~ \hat{W}_{i,j} = \dfrac{W_{i,j} - \mu_{W_{i,\cdot}}}{\sigma_{W_{i,\cdot}} + \epsilon}\Big] $$

$$ y = \hat{W}*x $$

where

$$ \mu_{W_{i,\cdot}} = \dfrac{1}{I}\sum_{j=1}^{I}W_{i, j},~~\sigma_{W_{i,\cdot}}=\sqrt{\dfrac{1}{I}\sum_{j=1}^{I}(W_{i,j} - \mu_{W_{i,\cdot}})^2} $$
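As a concrete illustration, here is a minimal PyTorch-style sketch of this reparameterization (the class name `WSConv2d` and the default `eps` value are illustrative assumptions, not taken from the source): the mean and standard deviation are computed per output channel over the $I = C_{in} \cdot K_h \cdot K_w$ weight entries, and the standardized weights $\hat{W}$ are used in the convolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d that standardizes its weights per output channel before convolving (illustrative sketch)."""

    def __init__(self, *args, eps=1e-5, **kwargs):
        super().__init__(*args, **kwargs)
        self.eps = eps  # assumed small constant for numerical stability

    def forward(self, x):
        # self.weight has shape (O, C_in, K_h, K_w); statistics are taken over the last three dims,
        # i.e. over I = C_in * K_h * K_w elements for each output channel i.
        w = self.weight
        mu = w.mean(dim=(1, 2, 3), keepdim=True)
        sigma = w.std(dim=(1, 2, 3), keepdim=True, unbiased=False)
        w_hat = (w - mu) / (sigma + self.eps)
        # y = W_hat * x, using the standardized weights in the convolution
        return F.conv2d(x, w_hat, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```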

Similar to Batch Normalization, WS controls the first and second moments of the weights of each output channel individually in convolutional layers. Note that many initialization methods initialize the weights in a similar way; different from those methods, WS standardizes the weights in a differentiable way, which aims to normalize gradients during back-propagation. Note also that no affine transformation is applied to $\hat{W}$, because a normalization layer such as BN or GN is assumed to normalize this convolutional layer again. An illustrative pairing with GN is sketched below.
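For context, a typical pairing (an illustrative sketch that reuses the hypothetical `WSConv2d` above, with example channel counts) places a normalization layer such as GN directly after the standardized convolution, which is why no affine transform is needed on $\hat{W}$ itself:

```python
# Standardized convolution followed by GroupNorm, which normalizes the layer's output again.
block = nn.Sequential(
    WSConv2d(64, 128, kernel_size=3, padding=1, bias=False),
    nn.GroupNorm(num_groups=32, num_channels=128),
    nn.ReLU(inplace=True),
)

x = torch.randn(2, 64, 32, 32)
y = block(x)  # shape: (2, 128, 32, 32)
```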

Source: Micro-Batch Training with Batch-Channel Normalization and Weight Standardization
