Channel-wise Soft Attention is an attention mechanism in computer vision that assigns a "soft" attention weight to each channel $c$. In soft channel-wise attention, the alignment weights are learned and distributed "softly" over all channels. This contrasts with hard attention, which selects only one channel to attend to at a time.
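The soft-versus-hard distinction above can be sketched in a few lines. Below is a minimal NumPy illustration (not any paper's reference implementation): soft attention reweights every channel by a softmax over learned scores, while hard attention keeps only the single highest-scoring channel. The function names and the scores array are illustrative assumptions; in practice the scores would come from a learned sub-network.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def channel_soft_attention(features, scores):
    # features: (C, H, W) feature map; scores: (C,) learned alignment scores.
    # Soft attention: every channel receives a nonzero weight; weights sum to 1.
    weights = softmax(scores)
    return features * weights[:, None, None], weights

def channel_hard_attention(features, scores):
    # Hard attention: select only the single highest-scoring channel.
    mask = np.zeros_like(scores)
    mask[np.argmax(scores)] = 1.0
    return features * mask[:, None, None], mask

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 8))      # C=4 channels, 8x8 spatial map
scores = np.array([0.1, 2.0, -1.0, 0.5])    # illustrative channel scores

soft_out, w = channel_soft_attention(feats, scores)
hard_out, m = channel_hard_attention(feats, scores)
print(w)   # all channels weighted, weights sum to 1
print(m)   # exactly one channel selected
```

The soft weights keep gradients flowing to every channel, which is why this variant is learnable end-to-end with plain backpropagation, whereas hard selection requires sampling-based estimators.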
Image credit: Xu et al.
Task | Papers | Share
---|---|---
Object Detection | 7 | 9.72%
Semantic Segmentation | 7 | 9.72%
Image Classification | 6 | 8.33%
Instance Segmentation | 4 | 5.56%
Lesion Segmentation | 2 | 2.78%
Point Cloud Completion | 2 | 2.78%
Decoder | 2 | 2.78%
Image Enhancement | 1 | 1.39%
Long-range modeling | 1 | 1.39%