A Global Context (GC) Block is an image model block for global context modeling. It aims to combine the benefits of the simplified non-local block, which models long-range dependencies effectively, with those of the squeeze-and-excitation block, which is computationally lightweight.
The Global Context framework consists of (a) global attention pooling, which applies a 1x1 convolution $W_{k}$ and a softmax function to obtain attention weights and then performs attention pooling to produce the global context features; (b) feature transform via a 1x1 convolution $W_{v}$; and (c) feature aggregation, which uses addition to aggregate the global context features into the features at each position. Taken as a whole, the GC block is proposed as a lightweight way to achieve global context modeling; a sketch of the block follows below.
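The following is a minimal PyTorch sketch of a GC block under the three-step layout described above. The bottleneck `ratio` and the LayerNorm + ReLU inside the transform follow the common GCNet formulation rather than anything stated in the text here, and the names (`GlobalContextBlock`, `conv_mask`, `transform`) are illustrative choices, not an official API.

```python
# A minimal sketch of a Global Context (GC) block, assuming PyTorch.
# The bottleneck ratio and LayerNorm/ReLU in the transform are assumptions
# taken from the usual GCNet design, not from the quoted description.
import torch
import torch.nn as nn


class GlobalContextBlock(nn.Module):
    def __init__(self, in_channels: int, ratio: float = 1 / 16):
        super().__init__()
        hidden = max(1, int(in_channels * ratio))
        # (a) global attention pooling: 1x1 conv W_k produces one logit per position
        self.conv_mask = nn.Conv2d(in_channels, 1, kernel_size=1)
        self.softmax = nn.Softmax(dim=2)
        # (b) feature transform: bottleneck of 1x1 convs (W_v) with LayerNorm + ReLU
        self.transform = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=1),
            nn.LayerNorm([hidden, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, in_channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # attention weights over all H*W positions
        attn = self.conv_mask(x).view(n, 1, h * w)   # [N, 1, HW]
        attn = self.softmax(attn).unsqueeze(-1)      # [N, 1, HW, 1]
        # attention pooling: weighted sum of features -> global context vector
        feats = x.view(n, c, h * w).unsqueeze(1)     # [N, 1, C, HW]
        context = torch.matmul(feats, attn)          # [N, 1, C, 1]
        context = context.view(n, c, 1, 1)           # [N, C, 1, 1]
        # (c) feature aggregation: broadcast-add the transformed context to each position
        return x + self.transform(context)


# Usage: refine a feature map, e.g. inside a residual backbone stage.
if __name__ == "__main__":
    block = GlobalContextBlock(in_channels=64)
    y = block(torch.randn(2, 64, 32, 32))
    print(y.shape)  # torch.Size([2, 64, 32, 32])
```

Because the context vector is computed once per image and broadcast to all positions, the block adds only 1x1 convolutions and a softmax, which is what keeps it lightweight compared with a full non-local block.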
Source: GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond
| Task | Papers | Share |
| --- | --- | --- |
| Object Detection | 3 | 13.64% |
| Decoder | 2 | 9.09% |
| Stereo Matching | 2 | 9.09% |
| Instance Segmentation | 2 | 9.09% |
| Real-Time Semantic Segmentation | 1 | 4.55% |
| Semantic Segmentation | 1 | 4.55% |
| Graph Neural Network | 1 | 4.55% |
| Prediction | 1 | 4.55% |
| Point Cloud Registration | 1 | 4.55% |
| Component | Type |
| --- | --- |
| 1x1 Convolution | Convolutions |
| Layer Normalization | Normalization |
| ReLU | Activation Functions |
| Residual Connection | Skip Connections |
| Softmax | Output Functions |