Kaiming Initialization, or He Initialization, is an initialization method for neural networks that takes into account the non-linearity of activation functions such as ReLU.
A proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. The authors derive that the condition preventing this is:
$$\frac{1}{2}n_{l}\text{Var}\left[w_{l}\right] = 1 $$
Here $n_{l}$ is the number of connections feeding into layer $l$, and the factor $\frac{1}{2}$ accounts for ReLU zeroing out half of its pre-activations on average. This implies an initialization scheme of:
$$ w_{l} \sim \mathcal{N}\left(0, 2/n_{l}\right)$$
That is, a zero-centered Gaussian with standard deviation $\sqrt{2/n_{l}}$ (the variance shown in the equation above). Biases are initialized to $0$.
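To make the scheme concrete, here is a minimal NumPy sketch; the function name `kaiming_normal` and the `fan_in`/`fan_out` parameters are illustrative, not from the paper:

```python
import numpy as np

def kaiming_normal(fan_in, fan_out, rng=None):
    """Draw weights from N(0, 2 / fan_in), i.e. std = sqrt(2 / n_l)."""
    rng = np.random.default_rng() if rng is None else rng
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

# Example: a 256 -> 128 fully connected layer followed by ReLU.
W = kaiming_normal(256, 128)
b = np.zeros(128)  # biases are initialized to 0
```

Deep-learning frameworks provide this scheme directly; for example, PyTorch's `torch.nn.init.kaiming_normal_` applies it in place to a weight tensor.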
Source: Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
| Task | Papers | Share |
|---|---|---|
| Image Classification | 65 | 9.49% |
| Self-Supervised Learning | 51 | 7.45% |
| Classification | 34 | 4.96% |
| Semantic Segmentation | 28 | 4.09% |
| Object Detection | 20 | 2.92% |
| Quantization | 14 | 2.04% |
| Autonomous Driving | 10 | 1.46% |
| Federated Learning | 9 | 1.31% |
| Image Segmentation | 8 | 1.17% |