Attention Mechanisms

Locally-Grouped Self-Attention

Introduced by Chu et al. in Twins: Revisiting the Design of Spatial Attention in Vision Transformers

Locally-Grouped Self-Attention, or LSA, is a local attention mechanism used in the Twins-SVT architecture. Motivated by the group design in depthwise convolutions for efficient inference, we first divide the 2D feature maps equally into sub-windows, so that self-attention communication happens only within each sub-window. This design also resonates with the multi-head design in self-attention, where communication occurs only among the channels of the same head. To be specific, the feature maps are divided into $m \times n$ sub-windows. Without loss of generality, we assume $H \% m=0$ and $W \% n=0$. Each group contains $\frac{H W}{m n}$ elements, so the computation cost of self-attention within one window is $\mathcal{O}\left(\frac{H^{2} W^{2}}{m^{2} n^{2}} d\right)$, and the total cost is $\mathcal{O}\left(\frac{H^{2} W^{2}}{m n} d\right)$. If we let $k_{1}=\frac{H}{m}$ and $k_{2}=\frac{W}{n}$, the cost can be written as $\mathcal{O}\left(k_{1} k_{2} H W d\right)$, which is significantly more efficient when $k_{1} \ll H$ and $k_{2} \ll W$, and grows linearly with $H W$ if $k_{1}$ and $k_{2}$ are fixed.
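Below is a minimal PyTorch sketch of the sub-window partition described above. The class name, the $(B, H, W, C)$ tensor layout, and the use of `nn.MultiheadAttention` are illustrative assumptions rather than the authors' implementation; the point is only to show how attention is restricted to $k_1 \times k_2$ windows.

```python
import torch
import torch.nn as nn

class LocallyGroupedSelfAttention(nn.Module):
    """Sketch of LSA: self-attention restricted to k1 x k2 sub-windows."""
    def __init__(self, dim, num_heads, k1, k2):
        super().__init__()
        self.k1, self.k2 = k1, k2  # sub-window height and width
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        # x: (B, H, W, C); assumes H % k1 == 0 and W % k2 == 0.
        B, H, W, C = x.shape
        m, n = H // self.k1, W // self.k2  # number of sub-windows per axis
        # Partition into m*n non-overlapping windows of k1*k2 tokens each.
        x = x.view(B, m, self.k1, n, self.k2, C)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(B * m * n, self.k1 * self.k2, C)
        # Attention runs independently inside each window, so the cost per
        # window is O((k1*k2)^2 * d) and the total is O(k1*k2*H*W*d).
        x, _ = self.attn(x, x, x, need_weights=False)
        # Undo the partition to restore the (B, H, W, C) layout.
        x = x.view(B, m, n, self.k1, self.k2, C)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
        return x

# Usage: 7x7 windows on a 56x56 feature map, as might appear in an early
# stage of a pyramid vision transformer (shapes here are hypothetical).
lsa = LocallyGroupedSelfAttention(dim=96, num_heads=3, k1=7, k2=7)
out = lsa(torch.randn(2, 56, 56, 96))  # -> (2, 56, 56, 96)
```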

Although the locally-grouped self-attention mechanism is computation-friendly, the image is divided into non-overlapping sub-windows, so we need a mechanism to communicate between different sub-windows, as in Swin. Otherwise, information would only be processed locally, which keeps the receptive field small and significantly degrades performance, as shown in our experiments. This resembles the fact that we cannot replace all standard convolutions in CNNs with depthwise convolutions.

Source: Twins: Revisiting the Design of Spatial Attention in Vision Transformers

