Batch Normalization aims to reduce internal covariate shift and thereby accelerate the training of deep neural networks. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on gradient flow through the network, reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows much higher learning rates to be used without the risk of divergence. Furthermore, Batch Normalization regularizes the model and reduces the need for Dropout.
We apply a batch normalization layer as follows for a minibatch $\mathcal{B}$:
$$ \mu_{\mathcal{B}} = \frac{1}{m}\sum^{m}_{i=1}x_{i} $$
$$ \sigma^{2}_{\mathcal{B}} = \frac{1}{m}\sum^{m}_{i=1}\left(x_{i} - \mu_{\mathcal{B}}\right)^{2} $$
$$ \hat{x}_{i} = \frac{x_{i} - \mu_{\mathcal{B}}}{\sqrt{\sigma^{2}_{\mathcal{B}}+\epsilon}} $$
$$ y_{i} = \gamma\hat{x}_{i} + \beta = \text{BN}_{\gamma, \beta}\left(x_{i}\right) $$
where $\gamma$ and $\beta$ are learnable parameters.
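As a concrete illustration, below is a minimal NumPy sketch of the training-time forward pass implied by these equations; the function and parameter names (`batch_norm_forward`, `gamma`, `beta`, `eps`) are illustrative rather than taken from any particular library.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Illustrative training-time BN forward pass for a minibatch x of shape (m, d)."""
    mu = x.mean(axis=0)                    # per-feature minibatch mean
    var = x.var(axis=0)                    # per-feature minibatch variance (1/m, as in the paper)
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize to zero mean, unit variance
    return gamma * x_hat + beta            # scale and shift with learnable parameters

# Example: a minibatch of 32 examples with 4 features
x = np.random.randn(32, 4)
y = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
```

At inference time, the minibatch statistics are typically replaced by running averages of the mean and variance accumulated during training, as described in the original paper.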
Source: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

| Task | Papers | Share |
|---|---|---|
| Image Classification | 55 | 8.06% |
| Object Detection | 50 | 7.33% |
| Semantic Segmentation | 41 | 6.01% |
| General Classification | 31 | 4.55% |
| Self-Supervised Learning | 20 | 2.93% |
| Image Generation | 18 | 2.64% |
| Quantization | 18 | 2.64% |
| Instance Segmentation | 13 | 1.91% |
| Domain Adaptation | 13 | 1.91% |