# Local Response Normalization

Introduced by Krizhevsky et al. in *ImageNet Classification with Deep Convolutional Neural Networks*.

Local Response Normalization is a normalization layer that implements the idea of lateral inhibition. Lateral inhibition is a concept in neurobiology that refers to the phenomenon of an excited neuron inhibiting its neighbours: this leads to a peak in the form of a local maximum, creating contrast in that area and increasing sensory perception. In practice, we can either normalize within the same channel or normalize across channels when we apply LRN to convolutional neural networks.

$$b_{c} = a_{c}\left(k + \frac{\alpha}{n}\sum_{c'=\max(0, c-n/2)}^{\min(N-1,c+n/2)}a_{c'}^2\right)^{-\beta}$$

where $n$ is the number of neighbouring channels used for normalization, $N$ is the total number of channels, $\alpha$ is a multiplicative factor, $\beta$ an exponent, and $k$ an additive factor.
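The formula above can be sketched directly in NumPy. This is a minimal illustration of cross-channel LRN for a single `(C, H, W)` feature map, not a reference implementation; the function name is hypothetical, and the default hyperparameters ($n=5$, $\alpha=10^{-4}$, $\beta=0.75$, $k=2$) are the values reported by Krizhevsky et al.

```python
import numpy as np

def local_response_norm(a, n=5, alpha=1e-4, beta=0.75, k=2.0):
    """Cross-channel LRN for an input of shape (C, H, W).

    Implements b_c = a_c * (k + (alpha/n) * sum of a_{c'}^2)^(-beta),
    where the sum runs over the n neighbouring channels, clipped at
    the channel boundaries as in the formula above.
    """
    C = a.shape[0]
    b = np.empty_like(a)
    for c in range(C):
        lo = max(0, c - n // 2)          # lower channel bound
        hi = min(C - 1, c + n // 2)      # upper channel bound
        denom = k + (alpha / n) * (a[lo:hi + 1] ** 2).sum(axis=0)
        b[c] = a[c] * denom ** (-beta)
    return b
```

Note that this version divides $\alpha$ by $n$, matching the equation as written here; the original AlexNet paper states the formula without the $1/n$ factor, so check which convention a given library uses.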

#### Tasks

| Task | Papers | Share |
|------|--------|-------|
| General Classification | 68 | 15.35% |
| Image Classification | 46 | 10.38% |
| Quantization | 43 | 9.71% |
| Object Detection | 33 | 7.45% |
| Object Recognition | 24 | 5.42% |
| Model Compression | 13 | 2.93% |
| Arithmetic | 10 | 2.26% |
| Network Pruning | 9 | 2.03% |
| Semantic Segmentation | 6 | 1.35% |
