The Cross-Attention module is an attention module used in CrossViT to fuse multi-scale features. The CLS token of the large branch serves as a query that interacts with the patch tokens of the small branch through attention; $f(\cdot)$ and $g(\cdot)$ are linear projections that align the embedding dimensions of the two branches. The small branch follows the same procedure, with its CLS token attending to the patch tokens of the large branch.
Source: CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification
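To make the fusion step concrete, below is a minimal PyTorch sketch of one cross-attention exchange for the large branch. It uses `torch.nn.MultiheadAttention` in place of the paper's custom attention layer, and the class name `CrossAttentionFusion`, the dimensions `dim_l`/`dim_s`, and the tensor shapes are illustrative assumptions, not the reference implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Sketch of CrossViT-style cross-attention (large-branch side).

    The large branch's CLS token is projected by f(.) into the small
    branch's dimension, attends to the small branch's patch tokens,
    and the result is projected back by g(.) and added residually.
    """

    def __init__(self, dim_l: int, dim_s: int, num_heads: int = 8):
        super().__init__()
        self.f = nn.Linear(dim_l, dim_s)  # f(.): align CLS to the small branch's dim
        self.attn = nn.MultiheadAttention(dim_s, num_heads, batch_first=True)
        self.g = nn.Linear(dim_s, dim_l)  # g(.): project the fused CLS back

    def forward(self, cls_l: torch.Tensor, patches_s: torch.Tensor) -> torch.Tensor:
        # cls_l:     (B, 1, dim_l)  CLS token of the large branch
        # patches_s: (B, N, dim_s)  patch tokens of the small branch
        q = self.f(cls_l)                      # projected CLS is the only query
        kv = torch.cat([q, patches_s], dim=1)  # keys/values: CLS + other branch's patches
        out, _ = self.attn(q, kv, kv)          # (B, 1, dim_s)
        return cls_l + self.g(out)             # residual connection in dim_l

# Hypothetical usage: 384-dim large branch, 192-dim small branch with 196 patches.
fusion = CrossAttentionFusion(dim_l=384, dim_s=192)
cls_l = torch.randn(2, 1, 384)
patches_s = torch.randn(2, 196, 192)
fused_cls = fusion(cls_l, patches_s)  # shape: (2, 1, 384)
```

Because only the single CLS token acts as a query, this exchange is linear in the number of patch tokens of the other branch, which is what makes the fusion cheap compared to full token-to-token cross-attention.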
| Task | Papers | Share |
|---|---|---|
| Semantic Segmentation | 5 | 6.41% |
| Autonomous Driving | 3 | 3.85% |
| Image Classification | 3 | 3.85% |
| Image Retrieval | 2 | 2.56% |
| Object Detection | 2 | 2.56% |
| Image Segmentation | 2 | 2.56% |
| Medical Image Segmentation | 2 | 2.56% |
| Multi-Task Learning | 2 | 2.56% |
| Machine Translation | 2 | 2.56% |