Residual Networks, or ResNets, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. The residual blocks are stacked on top of each other to form the network: e.g., a ResNet-50 has fifty layers built from these blocks.
Formally, denoting the desired underlying mapping as $\mathcal{H}(x)$, we let the stacked nonlinear layers fit another mapping $\mathcal{F}(x) := \mathcal{H}(x) - x$. The original mapping is then recast as $\mathcal{F}(x) + x$.
There is empirical evidence that networks of this type are easier to optimize and can gain accuracy from considerably increased depth.
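As a rough illustration, a minimal residual block in PyTorch might look like the sketch below. It assumes equal input and output channel counts so the identity shortcut can be added directly; the specific layer configuration is a simplified example rather than the exact bottleneck blocks used in ResNet-50.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = F(x) + x (illustrative sketch)."""
    def __init__(self, channels):
        super().__init__()
        # Two stacked 3x3 conv layers form the residual function F(x)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                                 # the skip (identity) connection
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))              # F(x)
        return self.relu(out + identity)             # H(x) = F(x) + x

# Example usage: channel count and input size are illustrative assumptions
block = ResidualBlock(64)
y = block(torch.randn(1, 64, 56, 56))                # output shape: (1, 64, 56, 56)
```

Because the block only has to learn the residual $\mathcal{F}(x)$, an identity mapping is trivially recovered by driving the weights toward zero, which is part of why very deep stacks of these blocks remain trainable.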
Source: Deep Residual Learning for Image Recognition
Task | Papers | Share |
---|---|---|
Image Classification | 70 | 9.68% |
Self-Supervised Learning | 59 | 8.16% |
Classification | 32 | 4.43% |
Semantic Segmentation | 30 | 4.15% |
Object Detection | 20 | 2.77% |
Quantization | 14 | 1.94% |
Image Generation | 10 | 1.38% |
Federated Learning | 9 | 1.24% |
Autonomous Driving | 9 | 1.24% |