A Bottleneck Transformer Block is a block used in Bottleneck Transformers that replaces the spatial 3 × 3 convolution layer in a Residual Block with Multi-Head Self-Attention (MHSA).
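The replacement described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation: projection weights are random, the 1×1 convolutions around the attention are reduced to the identity, and the 2D relative position encodings used in the paper are omitted. It shows only the structural idea that the spatial 3×3 convolution becomes all-to-all Multi-Head Self-Attention over the H×W positions, inside a residual connection.

```python
import numpy as np

def mhsa_2d(x, num_heads=4, seed=0):
    """All-to-all self-attention over the H*W spatial positions.

    x: (H, W, C) feature map. This stands in for the 3x3 convolution
    in a bottleneck residual block. Weights are random: illustration only.
    Relative position encodings (used in the paper) are omitted.
    """
    H, W, C = x.shape
    assert C % num_heads == 0
    d = C // num_heads
    rng = np.random.default_rng(seed)
    # Pointwise (1x1) projections for queries, keys, values.
    Wq, Wk, Wv = (rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(3))
    tokens = x.reshape(H * W, C)  # flatten the spatial grid into tokens
    q = (tokens @ Wq).reshape(H * W, num_heads, d)
    k = (tokens @ Wk).reshape(H * W, num_heads, d)
    v = (tokens @ Wv).reshape(H * W, num_heads, d)
    out = np.empty_like(q)
    for h in range(num_heads):
        scores = q[:, h] @ k[:, h].T / np.sqrt(d)  # (HW, HW) logits
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)  # softmax over positions
        out[:, h] = attn @ v[:, h]
    return out.reshape(H, W, C)

def bot_block(x):
    """Residual block: MHSA replaces the 3x3 conv; skip connection kept.

    The surrounding 1x1 convolutions of a real bottleneck block are
    left out (identity) to keep the sketch short.
    """
    return x + mhsa_2d(x)
```

Because attention is all-to-all, every output position can aggregate information from the whole feature map, unlike the 3×3 convolution's local window; the cost is quadratic in H×W, which is why the paper applies this block only in the final, low-resolution stage of the backbone.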
Source: *Bottleneck Transformers for Visual Recognition*
| Task | Papers | Share |
|---|---|---|
| Whole Slide Images | 2 | 11.11% |
| Instance Segmentation | 2 | 11.11% |
| Anomaly Detection In Surveillance Videos | 1 | 5.56% |
| Optical Flow Estimation | 1 | 5.56% |
| Classification | 1 | 5.56% |
| Emotion Recognition | 1 | 5.56% |
| Self-Supervised Learning | 1 | 5.56% |
| Graph Neural Network | 1 | 5.56% |
| Semantic Segmentation | 1 | 5.56% |
| Component | Type |
|---|---|
| | Convolutions |
| | Attention Mechanisms |
| | Attention Modules |
| | Convolutions |
| | Skip Connections |