Attention Modules

Bottleneck Transformer Block

Introduced by Srinivas et al. in Bottleneck Transformers for Visual Recognition

A Bottleneck Transformer Block is the building block of Bottleneck Transformers (BoTNet). It replaces the spatial 3 × 3 convolution layer in a ResNet bottleneck Residual Block with global Multi-Head Self-Attention (MHSA), so each position attends to the entire feature map instead of a local 3 × 3 neighborhood.
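To make the replacement concrete, here is a minimal NumPy sketch of multi-head self-attention applied to a 2-D feature map: the H × W grid is flattened into a sequence, attended over globally, and reshaped back, giving a drop-in stand-in for the 3 × 3 spatial convolution. This is a simplified illustration only (the function name, random projection weights, and omission of the paper's relative position encodings are assumptions, not the paper's exact formulation).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mhsa_2d(x, wq, wk, wv, num_heads):
    """Global multi-head self-attention over a spatial feature map.

    x: (H, W, C) feature map; wq/wk/wv: (C, C) projection matrices.
    Hypothetical helper for illustration; relative position encodings
    from the BoTNet paper are omitted for brevity.
    """
    h, w, c = x.shape
    d = c // num_heads  # per-head channel dimension
    seq = x.reshape(h * w, c)  # flatten spatial grid to a sequence
    # Project and split into heads: (num_heads, H*W, d)
    q = (seq @ wq).reshape(h * w, num_heads, d).transpose(1, 0, 2)
    k = (seq @ wk).reshape(h * w, num_heads, d).transpose(1, 0, 2)
    v = (seq @ wv).reshape(h * w, num_heads, d).transpose(1, 0, 2)
    # Scaled dot-product attention: every position attends to all H*W positions.
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d), axis=-1)
    # Merge heads and restore the (H, W, C) spatial layout.
    return (attn @ v).transpose(1, 0, 2).reshape(h, w, c)

rng = np.random.default_rng(0)
x = rng.standard_normal((14, 14, 64))
wq, wk, wv = (rng.standard_normal((64, 64)) * 0.1 for _ in range(3))
y = mhsa_2d(x, wq, wk, wv, num_heads=4)
```

Because the output shape matches the input shape, the surrounding bottleneck structure (1 × 1 convolutions, shortcut connection) can stay unchanged when the 3 × 3 convolution is swapped out.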

Source: Bottleneck Transformers for Visual Recognition
