Residual Networks, or ResNets, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. Residual blocks are stacked on top of each other to form networks: e.g. a ResNet-50 has fifty layers built from these blocks.
Formally, denoting the desired underlying mapping as $\mathcal{H}(x)$, we let the stacked nonlinear layers fit another mapping of $\mathcal{F}(x):=\mathcal{H}(x)-x$. The original mapping is recast into $\mathcal{F}(x)+x$.
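In practice, $\mathcal{F}(x)+x$ is realized by an identity shortcut connection that skips the stacked layers and is added to their output. A minimal sketch of such a block in PyTorch is shown below; it is illustrative only, assuming equal input/output channels and omitting the batch normalization and bottleneck (1x1 convolution) variants used in the paper's actual architectures.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Illustrative residual block: output = ReLU(F(x) + x)."""

    def __init__(self, channels: int):
        super().__init__()
        # F(x): two stacked 3x3 convolutions with a nonlinearity in between
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.relu(self.conv1(x))
        residual = self.conv2(residual)
        # Recast mapping: H(x) = F(x) + x via the identity shortcut
        return self.relu(residual + x)

# Usage: the shortcut requires the output shape to match the input shape
x = torch.randn(1, 64, 56, 56)
block = ResidualBlock(64)
y = block(x)  # shape (1, 64, 56, 56)
```

Because the shortcut is a simple addition, it introduces no extra parameters, and the stacked layers only need to learn the (often small) correction $\mathcal{F}(x)$ rather than the full mapping.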
There is empirical evidence that these networks are easier to optimize than their plain counterparts, and that they can gain accuracy from considerably increased depth.
Source: Deep Residual Learning for Image Recognition
Task | Papers | Share |
---|---|---|
Image Classification | 74 | 12.33% |
Self-Supervised Learning | 42 | 7.00% |
Semantic Segmentation | 29 | 4.83% |
Object Detection | 23 | 3.83% |
General Classification | 15 | 2.50% |
Quantization | 13 | 2.17% |
Knowledge Distillation | 10 | 1.67% |
Domain Adaptation | 9 | 1.50% |
Speaker Verification | 8 | 1.33% |