Convolutional Neural Networks

Residual Network

Introduced by He et al. in Deep Residual Learning for Image Recognition

Residual Networks, or ResNets, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. Residual blocks are stacked on top of each other to form a network: e.g. a ResNet-50 has fifty layers using these blocks.

Formally, denoting the desired underlying mapping as $\mathcal{H}(x)$, we let the stacked nonlinear layers fit another mapping of $\mathcal{F}(x):=\mathcal{H}(x)-x$. The original mapping is recast into $\mathcal{F}(x)+x$.
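The identity skip connection can be sketched in a few lines. The following is a minimal NumPy illustration that uses fully connected layers in place of the convolutional layers of an actual ResNet block; the weight shapes and two-layer structure are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    # Stacked layers fit the residual F(x); the skip connection adds x back,
    # so the block computes H(x) = F(x) + x (here with a final nonlinearity).
    f = relu(x @ w1) @ w2  # F(x): two weight layers with a nonlinearity between
    return relu(f + x)     # identity shortcut: add the block input

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
w1 = 0.1 * rng.standard_normal((8, 8))
w2 = 0.1 * rng.standard_normal((8, 8))
y = residual_block(x, w1, w2)  # same shape as x
```

Note that if the weight layers learn nothing (all-zero weights), the block degenerates to passing its input through, which is why extra residual blocks do not make the mapping harder to represent.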

There is empirical evidence that networks of this type are easier to optimize and can gain accuracy from considerably increased depth.





Usage by task:

Task                        Papers  Share
Image Classification        83      11.76%
General Classification      61      8.64%
Semantic Segmentation       40      5.67%
Object Detection            39      5.52%
Self-Supervised Learning    20      2.83%
Instance Segmentation       17      2.41%
Quantization                17      2.41%
Action Recognition          11      1.56%
Clustering                  10      1.42%