Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

11 Feb 2015 · Sergey Ioffe · Christian Szegedy

Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs.
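As a rough illustration of the normalization the abstract describes, here is a minimal NumPy sketch of a batch-norm forward pass over a mini-batch, with the learnable scale (gamma) and shift (beta) from the paper. The function names, the running-average momentum, and the use of running statistics at inference time are illustrative assumptions, not details taken from this page.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, running_mean, running_var,
                       momentum=0.9, eps=1e-5, training=True):
    """Batch-normalize a mini-batch x of shape (N, D). Hypothetical sketch."""
    if training:
        # Per-feature statistics computed over the current mini-batch.
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        # Running estimates (an assumed convention) for use at inference time.
        running_mean = momentum * running_mean + (1 - momentum) * mu
        running_var = momentum * running_var + (1 - momentum) * var
    else:
        mu, var = running_mean, running_var
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize to zero mean, unit variance
    out = gamma * x_hat + beta             # learnable scale and shift
    return out, running_mean, running_var

# Example: normalize a batch of 64 samples with 8 features each.
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 8))
out, rm, rv = batch_norm_forward(x, np.ones(8), np.zeros(8),
                                 np.zeros(8), np.ones(8))
```

Because the statistics are recomputed for every mini-batch, each layer sees inputs with a stable mean and variance regardless of how earlier layers' parameters shift during training.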


Evaluation


| Task                 | Dataset  | Model        | Metric         | Value | Global rank |
|----------------------|----------|--------------|----------------|-------|-------------|
| Image Classification | ImageNet | Inception V2 | Top-1 Accuracy | 74.8% | #12         |
| Image Classification | ImageNet | Inception V2 | Top-5 Accuracy | 92.2% | #12         |