Self-Attention Network (SANet) proposes two variants of self-attention for image recognition: (1) pairwise self-attention, which generalizes standard dot-product attention and is fundamentally a set operator, and (2) patchwise self-attention, which is strictly more powerful than convolution.
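Below is a minimal PyTorch sketch of the pairwise variant with the subtraction relation over a local footprint. The module layout, the 1×1 embeddings `phi`/`psi`/`beta`, and the small MLP `gamma` are illustrative assumptions for exposition, not the authors' exact implementation.

```python
# Sketch of pairwise self-attention (subtraction relation), assuming a
# k x k footprint; names and layer choices are illustrative, not the
# authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairwiseSelfAttention(nn.Module):
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.k = kernel_size
        # phi/psi embed the features; beta transforms the "values"
        self.phi = nn.Conv2d(channels, channels, 1)
        self.psi = nn.Conv2d(channels, channels, 1)
        self.beta = nn.Conv2d(channels, channels, 1)
        # gamma maps the relation delta(x_i, x_j) to per-channel weights
        self.gamma = nn.Sequential(
            nn.Conv2d(channels, channels, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1))

    def forward(self, x):
        b, c, h, w = x.shape
        pad = self.k // 2
        q = self.phi(x)                                       # (b, c, h, w)
        # unfold gathers the k*k neighbourhood of every position
        k_feats = F.unfold(self.psi(x), self.k, padding=pad)  # (b, c*k*k, h*w)
        v_feats = F.unfold(self.beta(x), self.k, padding=pad)
        k_feats = k_feats.view(b, c, self.k * self.k, h, w)
        v_feats = v_feats.view(b, c, self.k * self.k, h, w)
        # subtraction relation: delta(x_i, x_j) = phi(x_i) - psi(x_j)
        rel = q.unsqueeze(2) - k_feats                        # (b, c, k*k, h, w)
        # gamma expects (b, c, h, w); fold the footprint into the batch dim
        rel = rel.permute(0, 2, 1, 3, 4).reshape(b * self.k * self.k, c, h, w)
        weights = self.gamma(rel).view(b, self.k * self.k, c, h, w)
        # normalize over the footprint, then aggregate the values
        weights = weights.softmax(dim=1).permute(0, 2, 1, 3, 4)
        return (weights * v_feats).sum(dim=2)                 # (b, c, h, w)
```

The pairwise weights depend on each pair (x_i, x_j) separately, which is what makes the operator a set operator. In the patchwise variant, by contrast, the weights for all positions in the footprint are computed jointly from the whole patch, so they can mix information across positions; this is what lifts its expressive power beyond convolution.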
Source: Exploring Self-attention for Image Recognition
| Task | Papers | Share |
|---|---|---|
| Style Transfer | 2 | 25.00% |
| Image Super-Resolution | 1 | 12.50% |
| Super-Resolution | 1 | 12.50% |
| Video Polyp Segmentation | 1 | 12.50% |
| Camera Localization | 1 | 12.50% |
| Scene Understanding | 1 | 12.50% |
| Semantic Segmentation | 1 | 12.50% |