Local Response Normalization

Introduced by Krizhevsky et al. in ImageNet Classification with Deep Convolutional Neural Networks

Local Response Normalization (LRN) is a normalization layer that implements the idea of lateral inhibition. Lateral inhibition is a concept from neurobiology describing how an excited neuron suppresses its neighbours: this produces a local maximum around the peak response, sharpening contrast in that area and enhancing sensory perception. In practice, when applying LRN to convolutional neural networks, we can normalize either within the same channel or across channels.

$$ b_{c} = a_{c}\left(k + \frac{\alpha}{n}\sum_{c'=\max(0, c-n/2)}^{\min(N-1,c+n/2)}a_{c'}^2\right)^{-\beta} $$

where $n$ is the size, i.e. the number of neighbouring channels used for normalization, $N$ is the total number of channels, $\alpha$ is a multiplicative factor, $\beta$ an exponent, and $k$ an additive factor.
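The cross-channel formula above can be sketched directly in NumPy. This is a minimal illustration for a single spatial position (a 1-D vector of activations indexed by channel); the function name is ours, and the default hyperparameters ($k=2$, $n=5$, $\alpha=10^{-4}$, $\beta=0.75$) follow the values reported in the AlexNet paper:

```python
import numpy as np

def local_response_norm(a, n=5, k=2.0, alpha=1e-4, beta=0.75):
    """Cross-channel LRN at one spatial position.

    a: 1-D array of activations a_c indexed by channel c.
    n: number of neighbouring channels summed over (the "size").
    """
    N = a.shape[0]
    b = np.empty_like(a, dtype=float)
    for c in range(N):
        # Sum of squares over the window [max(0, c - n/2), min(N - 1, c + n/2)].
        lo = max(0, c - n // 2)
        hi = min(N - 1, c + n // 2)
        s = np.sum(a[lo:hi + 1] ** 2)
        # b_c = a_c * (k + (alpha / n) * sum)^(-beta)
        b[c] = a[c] * (k + (alpha / n) * s) ** (-beta)
    return b
```

Note that channels near the edges have truncated windows, so they are divided by a smaller sum; with $\alpha = 0$ and $k = 1$ the layer reduces to the identity, which makes a convenient sanity check.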

Source: ImageNet Classification with Deep Convolutional Neural Networks

Tasks


Task Papers Share
General Classification 68 16.35%
Image Classification 46 11.06%
Quantization 42 10.10%
Object Detection 32 7.69%
Object Recognition 24 5.77%
Model Compression 13 3.13%
Object Classification 8 1.92%
Network Pruning 7 1.68%
Semantic Segmentation 6 1.44%
