Channel-wise Soft Attention is an attention mechanism in computer vision that assigns a "soft" attention weight to each channel $c$. The alignment weights are learned and distributed "softly" over all channels, in contrast to hard attention, which selects only one channel to attend to at a time.
Image credit: Xu et al.
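The following is a minimal sketch of how such a mechanism can be implemented, assuming a PyTorch-style, squeeze-and-excitation-like design: each channel is summarized by global average pooling, a small bottleneck MLP produces one sigmoid weight per channel, and the feature map is rescaled by those weights. The class and parameter names (`ChannelSoftAttention`, `reduction`) are illustrative, not taken from any specific paper.

```python
import torch
import torch.nn as nn


class ChannelSoftAttention(nn.Module):
    """Illustrative channel-wise soft attention: every channel of the input
    feature map receives a learned weight in (0, 1)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Global average pooling summarizes each channel into a single scalar.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # A bottleneck MLP maps the channel descriptors to per-channel weights.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # "soft" weights in (0, 1), not a hard one-hot selection
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)        # (B, C) channel descriptors
        w = self.fc(w).view(b, c, 1, 1)    # (B, C, 1, 1) soft attention weights
        return x * w                       # reweight every channel softly


# Example: reweight a batch of 8 feature maps with 64 channels.
attn = ChannelSoftAttention(channels=64)
features = torch.randn(8, 64, 32, 32)
out = attn(features)
print(out.shape)  # torch.Size([8, 64, 32, 32])
```

Because the sigmoid output is a continuous weight per channel rather than a one-hot choice, gradients flow through every channel, which is what makes the attention "soft" and end-to-end trainable.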
| Task | Papers | Share |
|---|---|---|
| Image Classification | 6 | 12.77% |
| Semantic Segmentation | 4 | 8.51% |
| Instance Segmentation | 4 | 8.51% |
| Object Detection | 3 | 6.38% |
| Speaker Verification | 1 | 2.13% |
| Denoising | 1 | 2.13% |
| Image Denoising | 1 | 2.13% |
| Image Restoration | 1 | 2.13% |
| Knowledge Distillation | 1 | 2.13% |