Kaiming Initialization

Kaiming Initialization, also known as He Initialization, is an initialization method for neural networks that takes into account the non-linearity of activation functions such as ReLU.

A proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. For a network of ReLU layers, He et al. derive that a sufficient condition to prevent this is:

$$\frac{1}{2}n_{l}\text{Var}\left[w_{l}\right] = 1 $$
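Briefly, the reasoning (following the paper, assuming zero-mean, symmetrically distributed weights): each layer's pre-activation variance satisfies $\text{Var}[y_{l}] = n_{l}\text{Var}[w_{l}]\,\mathbb{E}[x_{l}^{2}]$, and a ReLU passes only half of a symmetric pre-activation's second moment, so $\mathbb{E}[x_{l}^{2}] = \frac{1}{2}\text{Var}[y_{l-1}]$. Chaining $L$ layers gives

$$\text{Var}\left[y_{L}\right] = \text{Var}\left[y_{1}\right]\prod_{l=2}^{L}\frac{1}{2}n_{l}\text{Var}\left[w_{l}\right]$$

so setting each factor of the product to $1$ stops the signal from shrinking or growing exponentially with depth.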

Solving for the variance gives $\text{Var}[w_{l}] = 2/n_{l}$, which implies the initialization scheme:

$$ w_{l} \sim \mathcal{N}\left(0, 2/n_{l}\right)$$

That is, a zero-centered Gaussian whose standard deviation is $\sqrt{2/n_{l}}$ (the variance is shown in the equation above), where $n_{l}$ is the number of incoming connections (the fan-in) of layer $l$. Biases are initialized at $0$.
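As a sanity check, here is a minimal NumPy sketch (the layer width, depth, and batch size are arbitrary illustrative choices) that draws weights from $\mathcal{N}(0, 2/n_{l})$ and verifies empirically that the pre-activation variance stays roughly constant with depth rather than decaying or exploding:

```python
import numpy as np

def kaiming_init(fan_in, fan_out, rng):
    """Weights ~ N(0, 2 / fan_in); biases start at zero."""
    w = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))
    b = np.zeros(fan_out)
    return w, b

rng = np.random.default_rng(0)
n = 512                                   # width n_l of every layer
x = rng.normal(size=(10_000, n))          # unit-variance inputs
for layer in range(1, 11):
    w, b = kaiming_init(n, n, rng)
    y = x @ w + b                         # pre-activation
    x = np.maximum(y, 0.0)                # ReLU
    print(f"layer {layer}: pre-activation variance ~ {y.var():.3f}")
```

With the condition $\frac{1}{2}n_{l}\text{Var}[w_{l}] = 1$ satisfied, the printed variance hovers around the same value at every depth; scaling the weights by $\sqrt{1/n_{l}}$ instead makes it roughly halve with each layer. Deep learning frameworks ship this scheme directly, e.g. `torch.nn.init.kaiming_normal_` in PyTorch.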

Source: He et al. (2015), Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification.

Tasks

| Task | Papers | Share |
| --- | --- | --- |
| Image Classification | 47 | 7.37% |
| Self-Supervised Learning | 40 | 6.27% |
| Semantic Segmentation | 31 | 4.86% |
| Classification | 24 | 3.76% |
| Image Segmentation | 15 | 2.35% |
| Object Detection | 13 | 2.04% |
| Quantization | 11 | 1.72% |
| Decoder | 9 | 1.41% |
| Denoising | 8 | 1.25% |
