Channel-wise Soft Attention is an attention mechanism in computer vision that assigns a "soft" attention weight to each channel $c$. In soft channel-wise attention, the alignment weights are learned and placed "softly" over all channels. This contrasts with hard attention, which selects only one channel to attend to at a time.
Image: Xu et al.
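The sketch below illustrates the idea in PyTorch. It is a minimal example, not an implementation from the source: it assumes a squeeze-and-excitation style gating network (global average pool followed by a small MLP with a sigmoid), and names such as `ChannelSoftAttention` and `reduction` are illustrative.

```python
# Minimal sketch of channel-wise soft attention (assumption: SE-style gating).
import torch
import torch.nn as nn


class ChannelSoftAttention(nn.Module):
    """Computes a soft weight in (0, 1) for every channel and rescales the
    feature map channel-wise, rather than picking a single channel (hard attention)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: one descriptor per channel
        self.fc = nn.Sequential(             # excitation: learn per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                     # "soft" weights over all channels
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)           # (B, C) channel descriptors
        w = self.fc(w).view(b, c, 1, 1)       # (B, C, 1, 1) soft attention weights
        return x * w                          # rescale each channel softly


if __name__ == "__main__":
    # Example: re-weight a batch of 8 feature maps with 64 channels.
    attn = ChannelSoftAttention(channels=64)
    feats = torch.randn(8, 64, 32, 32)
    print(attn(feats).shape)  # torch.Size([8, 64, 32, 32])
```

Because the gate is a sigmoid rather than an argmax, every channel keeps a nonzero weight, which is what makes the attention "soft" and end-to-end differentiable.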
| Task | Papers | Share |
|---|---|---|
| Image Classification | 6 | 10.53% |
| Semantic Segmentation | 5 | 8.77% |
| Object Detection | 4 | 7.02% |
| Instance Segmentation | 4 | 7.02% |
| Point Cloud Completion | 2 | 3.51% |
| Fake News Detection | 1 | 1.75% |
| 3D Classification | 1 | 1.75% |
| Classification | 1 | 1.75% |
| Object Detection In Aerial Images | 1 | 1.75% |