Residual Networks, or ResNets, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. Residual blocks are stacked on top of each other to form the network: e.g., a ResNet-50 has fifty layers built from these blocks.
Formally, denoting the desired underlying mapping as $\mathcal{H}(x)$, we let the stacked nonlinear layers fit another mapping, $\mathcal{F}(x) := \mathcal{H}(x) - x$. The original mapping is then recast as $\mathcal{F}(x) + x$.
There is empirical evidence that these types of networks are easier to optimize and can gain accuracy from considerably increased depth.
Source: Deep Residual Learning for Image Recognition
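As a concrete illustration of the $\mathcal{F}(x) + x$ formulation, below is a minimal sketch of a residual block, assuming PyTorch; the class name `BasicBlock` and the layer sizes are illustrative choices, not taken from the source paper.

```python
# Minimal sketch of a residual block (assumed PyTorch; names and sizes are illustrative).
import torch
import torch.nn as nn


class BasicBlock(nn.Module):
    """Two stacked 3x3 convolutions learn the residual F(x); the block outputs F(x) + x."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))  # first half of F(x)
        out = self.bn2(self.conv2(out))           # second half of F(x)
        return self.relu(out + x)                 # F(x) + x, followed by a nonlinearity


if __name__ == "__main__":
    block = BasicBlock(channels=64)
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```

Because the skip connection adds the input back in, a block can approximate the identity mapping simply by driving $\mathcal{F}(x)$ toward zero, which is what makes very deep stacks of such blocks easier to optimize.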
| Task | Papers | Share |
|---|---|---|
| Image Classification | 52 | 8.28% |
| Self-Supervised Learning | 51 | 8.12% |
| Semantic Segmentation | 25 | 3.98% |
| Classification | 22 | 3.50% |
| Object Detection | 13 | 2.07% |
| Quantization | 10 | 1.59% |
| Decoder | 8 | 1.27% |
| Denoising | 8 | 1.27% |
| Benchmarking | 8 | 1.27% |
| Component | Type |
|---|---|
| Batch Normalization | Normalization (optional) |
| Convolution | Convolutions |
| Global Average Pooling | Pooling Operations |
| Kaiming Initialization | Initialization |
| Max Pooling | Pooling Operations |
| ReLU | Activation Functions (optional) |
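To show how the components listed above typically fit together, here is a hedged sketch of a small ResNet-style classifier, again assuming PyTorch; `TinyResNet`, the layer widths, and the block count are illustrative and do not correspond to any specific ResNet variant.

```python
# Illustrative sketch combining the listed components (assumed PyTorch; sizes are arbitrary).
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """3x3 Conv -> BN -> ReLU -> 3x3 Conv -> BN, added back onto the input."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.body(x) + x)  # F(x) + x


class TinyResNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Stem: Convolution + Batch Normalization + ReLU + Max Pooling.
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        # A (tiny) stack of residual blocks.
        self.blocks = nn.Sequential(ResidualBlock(64), ResidualBlock(64))
        # Head: Global Average Pooling followed by a linear classifier.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(64, num_classes)
        # Kaiming Initialization for all convolutional weights.
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode="fan_out", nonlinearity="relu")

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.blocks(self.stem(x))
        return self.fc(torch.flatten(self.pool(x), 1))


if __name__ == "__main__":
    model = TinyResNet(num_classes=10)
    logits = model(torch.randn(2, 3, 64, 64))
    print(logits.shape)  # torch.Size([2, 10])
```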