Cross-Attention is the attention module used in CrossViT to fuse multi-scale features. The CLS token of the large branch serves as a query that interacts with the patch tokens of the small branch through attention; $f\left(\cdot\right)$ and $g\left(\cdot\right)$ are projections that align the embedding dimensions of the two branches. The small branch follows the same procedure with the roles of the two branches swapped.
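The fusion step above can be sketched in a few lines. The following is a minimal single-head NumPy sketch, not the paper's implementation: the weight matrices `W_f`, `W_q`, `W_k`, `W_v`, `W_g` and the function name `cross_attention` are illustrative placeholders, with `W_f` and `W_g` standing in for the alignment projections $f(\cdot)$ and $g(\cdot)$.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(cls_large, patches_small, W_f, W_q, W_k, W_v, W_g):
    """One cross-attention step: large-branch CLS attends to small-branch patches."""
    # f(.): project the large-branch CLS token into the small branch's dimension
    q_tok = cls_large @ W_f                       # (1, d_s)
    # Keys/values come from the projected CLS plus the small-branch patch tokens
    kv = np.concatenate([q_tok, patches_small])   # (1 + N, d_s)
    q = q_tok @ W_q                               # (1, d_s)
    k = kv @ W_k                                  # (1 + N, d_s)
    v = kv @ W_v                                  # (1 + N, d_s)
    # Scaled dot-product attention with the CLS token as the only query
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    out = attn @ v                                # (1, d_s)
    # g(.): project the updated CLS back to the large branch's dimension
    return out @ W_g                              # (1, d_l)
```

Because only the CLS token acts as a query, this step is linear in the number of patch tokens, which is what keeps CrossViT's fusion cheap compared with full token-to-token attention across branches.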
Source: CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification
| Task | Papers | Share |
|---|---|---|
Decoder | 14 | 6.80% |
Semantic Segmentation | 10 | 4.85% |
Object Detection | 7 | 3.40% |
Retrieval | 6 | 2.91% |
Image Classification | 6 | 2.91% |
Autonomous Driving | 5 | 2.43% |
Image Generation | 4 | 1.94% |
Image Super-Resolution | 4 | 1.94% |
Super-Resolution | 4 | 1.94% |