Channel-wise Soft Attention is an attention mechanism in computer vision that assigns a "soft" attention weight to each channel $c$. The alignment weights are learned and distributed "softly" over all channels. This contrasts with hard attention, which selects only one channel to attend to at a time.
Image credit: Xu et al.
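As a minimal sketch of the idea above (not the exact formulation of any particular paper): each channel is summarized by global average pooling, the pooled descriptors are passed through a softmax so every channel receives a weight in (0, 1) summing to 1, and the feature map is rescaled channel by channel. Real designs (e.g., SENet-style blocks) typically use a small learned gating network instead of a plain softmax; the function name and shapes here are illustrative assumptions.

```python
import numpy as np

def channel_soft_attention(features):
    """Soft channel-wise attention over a feature map of shape (C, H, W).

    Illustrative sketch: softmax over pooled channel descriptors stands in
    for a learned gating function.
    """
    # Squeeze: global average pooling yields one descriptor per channel.
    descriptors = features.mean(axis=(1, 2))            # shape (C,)
    # Soft alignment: softmax spreads weight over ALL channels,
    # unlike hard attention, which would pick a single channel.
    shifted = descriptors - descriptors.max()           # numerical stability
    weights = np.exp(shifted) / np.exp(shifted).sum()   # shape (C,), sums to 1
    # Rescale each channel by its attention weight.
    return features * weights[:, None, None]

x = np.random.rand(8, 4, 4)   # 8 channels, 4x4 spatial grid
y = channel_soft_attention(x)  # same shape, channels softly reweighted
```

Because the weights come from a softmax rather than an argmax, every channel keeps a nonzero contribution, which is what makes the attention "soft".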
Task | Papers | Share
---|---|---
Semantic Segmentation | 11 | 11.22%
Object Detection | 7 | 7.14%
Image Classification | 6 | 6.12%
Object | 4 | 4.08%
Instance Segmentation | 4 | 4.08%
Decoder | 3 | 3.06%
Image Segmentation | 2 | 2.04%
Super-Resolution | 2 | 2.04%
Lesion Segmentation | 2 | 2.04%