Local Response Normalization

Introduced by Krizhevsky et al. in ImageNet Classification with Deep Convolutional Neural Networks

Local Response Normalization (LRN) is a normalization layer that implements the idea of lateral inhibition. Lateral inhibition is a concept from neurobiology referring to the phenomenon of an excited neuron inhibiting its neighbours: this produces a peak in the form of a local maximum, creating contrast in that area and sharpening sensory perception. When applying LRN to convolutional neural networks, we can normalize either within the same channel or across channels.

$$ b_{c} = a_{c}\left(k + \frac{\alpha}{n}\sum_{c'=\max(0, c-n/2)}^{\min(N-1,c+n/2)}a_{c'}^2\right)^{-\beta} $$

where $n$ (the size) is the number of neighbouring channels used for normalization, $\alpha$ is a multiplicative factor, $\beta$ an exponent, and $k$ an additive factor.
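As an illustration, the cross-channel variant of the formula above can be sketched in NumPy. This is a minimal reference implementation, not an optimized one; the function name and argument defaults (which follow common framework conventions, e.g. `size=5`) are our own choices:

```python
import numpy as np

def local_response_norm(a, size=5, alpha=1e-4, beta=0.75, k=2.0):
    """Cross-channel LRN for an input of shape (C, H, W).

    Implements b_c = a_c * (k + alpha/n * sum_{c'} a_{c'}^2)^(-beta),
    where the sum runs over the window of `size` channels centred at c,
    clipped to the valid channel range [0, C-1].
    """
    C = a.shape[0]
    b = np.empty_like(a, dtype=float)
    for c in range(C):
        lo = max(0, c - size // 2)          # max(0, c - n/2)
        hi = min(C - 1, c + size // 2)      # min(N-1, c + n/2)
        denom = k + (alpha / size) * np.sum(a[lo:hi + 1] ** 2, axis=0)
        b[c] = a[c] * denom ** (-beta)
    return b
```

For production use, frameworks provide this directly, e.g. `torch.nn.LocalResponseNorm(size, alpha, beta, k)` in PyTorch.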

Source: ImageNet Classification with Deep Convolutional Neural Networks

