
Layer-Sequential Unit-Variance Initialization

Introduced by Mishkin et al. in All you need is a good init

Layer-Sequential Unit-Variance Initialization (LSUV) is a simple method for weight initialization when training deep networks. The strategy involves the following two steps (a minimal code sketch follows below):

1) First, pre-initialize the weights of each convolutional or inner-product layer with orthonormal matrices.

2) Second, proceed from the first to the final layer, normalizing the variance of the output of each layer to be equal to one.

Source: All you need is a good init
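
The following is a minimal PyTorch sketch of the two steps above, not the authors' reference implementation. The helper name lsuv_init and the tol and max_iters knobs are illustrative assumptions. The sketch orthogonally pre-initializes every convolutional and linear layer, then walks the layers in order, rescaling weights until each layer's output variance on a single data batch is close to one.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def lsuv_init(model, data_batch, tol=0.1, max_iters=10):
    # Hypothetical helper sketching LSUV; `tol` and `max_iters` are
    # illustrative knobs, not values taken from the paper.
    # Step 1: orthonormal pre-initialization of conv / inner-product layers.
    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            nn.init.orthogonal_(m.weight)
            if m.bias is not None:
                nn.init.zeros_(m.bias)

    # Step 2: proceed from the first to the final layer, rescaling each
    # layer's weights until its output variance on `data_batch` is ~1.
    # (Assumes registration order matches forward order, as in nn.Sequential.)
    for m in model.modules():
        if not isinstance(m, (nn.Conv2d, nn.Linear)):
            continue
        captured = {}
        handle = m.register_forward_hook(
            lambda mod, inp, out, store=captured: store.update(out=out))
        for _ in range(max_iters):
            model(data_batch)
            var = captured["out"].var().item()
            if abs(var - 1.0) < tol:
                break
            m.weight /= var ** 0.5  # scale weights toward unit output variance
        handle.remove()
```

The batch passed in should be representative of the training set, since the rescaling depends entirely on its activation statistics.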

Tasks

Task                   Papers   Share
Image Classification   1        100.00%
