Attention Mechanisms

Squeeze-and-Excitation Networks

Introduced by Hu et al. in Squeeze-and-Excitation Networks

SENet pioneered channel attention. The core of SENet is the squeeze-and-excitation (SE) block, which collects global information, captures channel-wise relationships, and improves representational ability. An SE block has two parts: a squeeze module and an excitation module. The squeeze module collects global spatial information via global average pooling (GAP). The excitation module captures channel-wise relationships and outputs an attention vector using fully-connected layers and non-linearities (ReLU and sigmoid). Each channel of the input feature is then scaled by the corresponding element of the attention vector. Overall, a squeeze-and-excitation block $F_\text{se}$ (with parameters $\theta$) that takes $X$ as input and outputs $Y$ can be formulated as:

\begin{align}
s & = F_\text{se}(X, \theta) = \sigma (W_{2}\, \delta (W_{1}\, \text{GAP}(X))) \\
Y & = sX
\end{align}

where $\delta$ denotes ReLU, $\sigma$ denotes the sigmoid function, and the product $sX$ scales each channel of $X$ by the corresponding entry of the attention vector $s$.
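A minimal PyTorch sketch of this formulation is shown below. The class and parameter names (`SEBlock`, `channels`, `reduction`) are illustrative, not from the paper; the two `nn.Linear` layers play the roles of $W_1$ and $W_2$, and the reduction ratio defaults to 16 as in the original work.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Sketch of a squeeze-and-excitation block."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze: global average pooling collapses each H x W feature map to a scalar.
        self.gap = nn.AdaptiveAvgPool2d(1)
        # Excitation: W1 -> ReLU (delta) -> W2 -> sigmoid (sigma) produces the attention vector s.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = self.gap(x).view(b, c)       # squeeze: (B, C)
        s = self.fc(s).view(b, c, 1, 1)  # excitation: attention vector s
        return x * s                     # Y = sX, broadcast over spatial dims

# Usage: recalibrate a 64-channel feature map; the output shape is unchanged.
x = torch.randn(2, 64, 32, 32)
y = SEBlock(64)(x)
assert y.shape == x.shape
```

The bottleneck (`channels // reduction`) keeps the excitation module cheap while still letting it model cross-channel dependencies.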

Source: Squeeze-and-Excitation Networks
