Residual Blocks are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the ResNet architecture.
Formally, denoting the desired underlying mapping as $\mathcal{H}(x)$, we let the stacked nonlinear layers fit the mapping $\mathcal{F}(x) := \mathcal{H}(x) - x$, so the original mapping is recast as $\mathcal{F}(x) + x$. Here $\mathcal{F}(x)$ acts as a residual, hence the name 'residual block'.
The intuition is that it is easier to optimize the residual mapping than the original, unreferenced mapping. In the extreme case, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping with a stack of nonlinear layers. Skip connections therefore let the network learn identity-like mappings more easily.
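The idea translates directly into code. Below is a minimal PyTorch-style sketch (an illustration for this page, not the paper's original implementation): two stacked 3x3 convolutions compute $\mathcal{F}(x)$, and the forward pass returns $\mathcal{F}(x) + x$, with an optional 1x1 projection on the shortcut when the shapes of $\mathcal{F}(x)$ and $x$ differ. The class name `BasicResidualBlock` is assumed for the example.

```python
import torch
import torch.nn as nn


class BasicResidualBlock(nn.Module):
    """Two 3x3 convolutions compute F(x); forward() returns F(x) + x."""

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

        # Optional 1x1 convolution on the shortcut, used only when the
        # channel count or spatial resolution of F(x) and x differ.
        self.shortcut = nn.Identity()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )

    def forward(self, x):
        residual = self.relu(self.bn1(self.conv1(x)))  # first layer of F(x)
        residual = self.bn2(self.conv2(residual))      # second layer of F(x)
        out = residual + self.shortcut(x)              # F(x) + x
        return self.relu(out)


# Example usage: a 64-channel feature map passes through unchanged in shape.
block = BasicResidualBlock(64, 64)
y = block(torch.randn(1, 64, 56, 56))
```

If the block learns $\mathcal{F}(x) \approx 0$, the output reduces to (a ReLU of) the shortcut, which is exactly the identity-like behavior described above.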
Note that in practice, Bottleneck Residual Blocks are used for deeper ResNets, such as ResNet-50 and ResNet-101, because they are less computationally expensive.
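As a rough illustration of the bottleneck variant, the sketch below (again assumed PyTorch-style code, following the commonly used 1x1-3x3-1x1 design with an expansion factor of 4) reduces the channel width before the 3x3 convolution and restores it afterwards, keeping the $\mathcal{F}(x) + x$ shortcut while lowering compute.

```python
import torch
import torch.nn as nn


class BottleneckResidualBlock(nn.Module):
    """1x1 reduce -> 3x3 -> 1x1 expand, with the same F(x) + x shortcut."""

    expansion = 4  # output channels are 4x the bottleneck width, as in ResNet-50/101

    def __init__(self, in_channels, width, stride=1):
        super().__init__()
        out_channels = width * self.expansion
        self.conv1 = nn.Conv2d(in_channels, width, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(width)
        self.conv2 = nn.Conv2d(width, width, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(width)
        self.conv3 = nn.Conv2d(width, out_channels, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

        # Projection shortcut when the input and output shapes differ.
        self.shortcut = nn.Identity()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))    # 1x1 reduces channels
        out = self.relu(self.bn2(self.conv2(out)))  # 3x3 on the reduced width
        out = self.bn3(self.conv3(out))             # 1x1 restores channels
        return self.relu(out + self.shortcut(x))    # F(x) + x


# Example usage: a 256-channel input with a bottleneck width of 64.
block = BottleneckResidualBlock(256, 64)
y = block(torch.randn(1, 256, 56, 56))
```

Because the expensive 3x3 convolution operates on the reduced width, the bottleneck block costs far fewer FLOPs than a basic block of comparable output width.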
Source: Deep Residual Learning for Image Recognition
| Task | Papers | Share |
|---|---|---|
| Image Classification | 33 | 4.22% |
| Self-Supervised Learning | 33 | 4.22% |
| Image Generation | 27 | 3.45% |
| Semantic Segmentation | 23 | 2.94% |
| Classification | 22 | 2.81% |
| Image-to-Image Translation | 21 | 2.69% |
| Translation | 18 | 2.30% |
| Super-Resolution | 15 | 1.92% |
| Deep Learning | 14 | 1.79% |
| Component | Type |
|---|---|
| 1x1 Convolution | Convolutions (optional) |
| Batch Normalization | Normalization |
| Convolution | Convolutions |
| ReLU | Activation Functions |
| Residual Connection | Skip Connections |