Residual Blocks are skip-connection blocks that learn residual functions with reference to the layer inputs, rather than learning unreferenced functions. They were introduced as part of the ResNet architecture.
Formally, denoting the desired underlying mapping as $\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping $\mathcal{F}({x}) := \mathcal{H}({x}) - {x}$. The original mapping is then recast as $\mathcal{F}({x}) + {x}$. Since $\mathcal{F}({x})$ acts as a residual, the unit is called a 'residual block'.
The intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. In the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping with a stack of nonlinear layers. Skip connections thus allow the network to learn identity-like mappings more easily.
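The formulation above can be sketched in a few lines. This is a minimal fully-connected toy version in NumPy (the actual ResNet blocks use convolutions and batch normalization); the function and variable names here are illustrative, not from the paper:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    # F(x): the residual mapping fit by the stacked nonlinear layers
    fx = W2 @ relu(W1 @ x)
    # Recast output as F(x) + x via the identity shortcut, then activate
    return relu(fx + x)

# With zero weights, F(x) = 0 and the block reduces to an identity
# mapping (for non-negative inputs, where relu(x) = x).
x = np.array([1.0, 2.0, 3.0])
W_zero = np.zeros((3, 3))
print(residual_block(x, W_zero, W_zero))  # → [1. 2. 3.]
```

Note that driving the residual to zero only requires shrinking the weights of $\mathcal{F}$, whereas representing an identity with plain stacked layers would require the weights to fit it exactly.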
Note that, in practice, Bottleneck Residual Blocks are used for deeper ResNets such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.
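The saving can be seen from a back-of-the-envelope parameter count, using the 256-channel bottleneck configuration described in the ResNet paper (counting only convolution weights, ignoring biases and batch-norm parameters):

```python
def conv_params(c_in, c_out, k):
    """Weight count of a k x k convolution, ignoring biases and BatchNorm."""
    return c_in * c_out * k * k

# Basic block on 256 channels: two 3x3 convolutions.
basic = 2 * conv_params(256, 256, 3)

# Bottleneck block: 1x1 reduce to 64, 3x3 at 64, 1x1 expand back to 256.
bottleneck = (conv_params(256, 64, 1)
              + conv_params(64, 64, 3)
              + conv_params(64, 256, 1))

print(basic)                           # → 1179648
print(bottleneck)                      # → 69632
print(round(basic / bottleneck, 1))    # → 16.9
```

The 1x1 convolutions cheaply reduce and then restore the channel dimension, so the expensive 3x3 convolution operates on far fewer channels.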
Source: Deep Residual Learning for Image Recognition

Task | Papers | Share
--- | --- | ---
Image Classification | 48 | 7.69%
Self-Supervised Learning | 42 | 6.73%
Semantic Segmentation | 23 | 3.69%
Object Detection | 19 | 3.04%
Image-to-Image Translation | 17 | 2.72%
Super-Resolution | 14 | 2.24%
Image Generation | 14 | 2.24%
Quantization | 13 | 2.08%
Knowledge Distillation | 13 | 2.08%
Component | Type
--- | ---
1x1 Convolution | Convolutions (optional)
Batch Normalization | Normalization
Convolution | Convolutions
ReLU | Activation Functions
Residual Connection | Skip Connections