Self-Attention Network (SANet) proposes two variations of self-attention for image recognition: 1) pairwise self-attention, which generalizes standard dot-product attention and is fundamentally a set operator, and 2) patchwise self-attention, which is strictly more powerful than convolution.
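The pairwise form can be sketched as follows: for each position, attention weights are derived from a pairwise relation between the query feature and each feature in a local footprint (e.g., subtraction), then softmax-normalized and used to aggregate the neighborhood. This is a minimal, hypothetical numpy sketch over a 1-D sequence; the paper uses learned mappings and 2-D footprints, which are omitted here.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pairwise_self_attention(x, footprint=3):
    """Pairwise self-attention over a 1-D sequence of feature vectors.

    x: (n, c) array. For each position i, weights come from the pairwise
    relation delta(x_i, x_j) = x_i - x_j (the 'subtraction' relation),
    reduced to a scalar and softmax-normalized over the local footprint.
    Assumption: the learned mappings of the paper are replaced by a fixed
    sum-reduction and an identity value transform, to keep the sketch small.
    """
    n, c = x.shape
    r = footprint // 2
    out = np.zeros_like(x)
    for i in range(n):
        js = list(range(max(0, i - r), min(n, i + r + 1)))
        # Scalar relation score per neighbor (stand-in for a learned mapping).
        rel = np.array([(x[i] - x[j]).sum() for j in js])
        w = softmax(rel)
        # Convex combination of neighborhood features.
        out[i] = sum(wj * x[j] for wj, j in zip(w, js))
    return out
```

Because each output is a convex combination of features in the footprint, the operator is permutation-aware only through the relation function, which is what makes pairwise self-attention a set operator rather than a fixed-kernel convolution.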
Source: Exploring Self-attention for Image Recognition
| Task | Papers | Share |
|---|---|---|
| Semantic Segmentation | 2 | 10.53% |
| Super-Resolution | 2 | 10.53% |
| Style Transfer | 2 | 10.53% |
| Crowd Counting | 1 | 5.26% |
| Scene Recognition | 1 | 5.26% |
| Real-Time Semantic Segmentation | 1 | 5.26% |
| Self-Driving Cars | 1 | 5.26% |
| Image Dehazing | 1 | 5.26% |
| Image Restoration | 1 | 5.26% |