Regularization

LayerScale

Introduced by Touvron et al. in Going deeper with Image Transformers

LayerScale is a method used in vision transformer architectures to improve training dynamics. It multiplies the output of each residual block by a learnable diagonal matrix whose entries are initialized close to (but not at) 0. Adding this simple layer after each residual block improves training dynamics, allowing the training of deeper, high-capacity image transformers that benefit from depth.

Specifically, LayerScale is a per-channel multiplication of the vector produced by each residual block, as opposed to scaling by a single scalar (see Figure (d) of the paper). The objective is to group the updates of the weights associated with the same output channel. Formally, LayerScale is a multiplication by a diagonal matrix on the output of each residual block:

$$ x_{l}^{\prime} =x_{l}+\operatorname{diag}\left(\lambda_{l, 1}, \ldots, \lambda_{l, d}\right) \times \operatorname{SA}\left(\eta\left(x_{l}\right)\right) $$

$$ x_{l+1} =x_{l}^{\prime}+\operatorname{diag}\left(\lambda_{l, 1}^{\prime}, \ldots, \lambda_{l, d}^{\prime}\right) \times \operatorname{FFN}\left(\eta\left(x_{l}^{\prime}\right)\right) $$

where the parameters $\lambda_{l, i}$ and $\lambda_{l, i}^{\prime}$ are learnable weights. The diagonal values are all initialized to a fixed small value $\varepsilon$: the paper sets $\varepsilon=0.1$ up to depth 18, $\varepsilon=10^{-5}$ for depth 24, and $\varepsilon=10^{-6}$ for deeper networks.
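As a concrete illustration, below is a minimal PyTorch sketch of a transformer block with LayerScale applied to both residual branches. The module and parameter names (`LayerScale`, `gamma`, `init_eps`) are illustrative rather than taken from the paper's code, and the self-attention and feed-forward modules are assumed to be passed in.

```python
import torch
import torch.nn as nn


class LayerScale(nn.Module):
    """Per-channel scaling of a residual branch, initialized to a small epsilon."""

    def __init__(self, dim, init_eps=1e-5):
        super().__init__()
        # diag(lambda_1, ..., lambda_d) stored as a learnable vector
        self.gamma = nn.Parameter(init_eps * torch.ones(dim))

    def forward(self, x):
        # Broadcast multiply over the channel (last) dimension
        return self.gamma * x


class Block(nn.Module):
    """Transformer block with LayerScale on both residual branches (sketch)."""

    def __init__(self, dim, attn, ffn, init_eps=1e-5):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)  # eta in the equations above
        self.attn = attn                # SA: self-attention module
        self.ls1 = LayerScale(dim, init_eps)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = ffn                  # FFN: feed-forward module
        self.ls2 = LayerScale(dim, init_eps)

    def forward(self, x):
        # x' = x + diag(lambda) * SA(eta(x))
        x = x + self.ls1(self.attn(self.norm1(x)))
        # x_{l+1} = x' + diag(lambda') * FFN(eta(x'))
        x = x + self.ls2(self.ffn(self.norm2(x)))
        return x
```

With `init_eps` set to a small value, each residual branch contributes almost nothing at initialization, so the block starts close to the identity and the network learns to scale up the branches during training.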

This formula is akin to other normalization strategies such as ActNorm or LayerNorm, but executed on the output of the residual block. Yet LayerScale seeks a different effect: ActNorm is a data-dependent initialization that calibrates activations so that they have zero mean and unit variance, like BatchNorm. In contrast, LayerScale initializes the diagonal with small values so that the initial contribution of the residual branches to the function implemented by the transformer is small. In that respect, the motivation is closer to that of ReZero, SkipInit, Fixup and T-Fixup: to start training close to the identity function and let the network integrate the additional parameters progressively during training. However, LayerScale offers more diversity in the optimization than adjusting the whole layer by a single learnable scalar as in ReZero/SkipInit, Fixup and T-Fixup.

Source: Going deeper with Image Transformers
