A Noisy Linear Layer is a linear layer with parametric noise added to its weights. This induced stochasticity can be used in the policy network of a reinforcement learning agent to aid efficient exploration. The noise parameters are learned by gradient descent along with the other network weights. Factorized Gaussian noise is the type of noise usually employed.
The noisy linear layer takes the form:
$$y = \left(b + Wx\right) + \left(b_{noisy}\odot\epsilon^{b}+\left(W_{noisy}\odot\epsilon^{w}\right)x\right) $$
where $\epsilon^{b}$ and $\epsilon^{w}$ are random variables.
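A minimal NumPy sketch of this layer, using the factorized Gaussian noise scheme from the paper ($\epsilon^{w}_{i,j} = f(\epsilon_i)f(\epsilon_j)$ with $f(x)=\mathrm{sgn}(x)\sqrt{|x|}$). The class name, initialization constants, and method names here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

class NoisyLinear:
    """Sketch of a noisy linear layer with factorized Gaussian noise.
    Initialization ranges are illustrative assumptions."""

    def __init__(self, in_features, out_features, sigma0=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        bound = 1.0 / np.sqrt(in_features)
        # Learnable means (W, b) and learnable noise scales (W_noisy, b_noisy).
        self.W = self.rng.uniform(-bound, bound, (out_features, in_features))
        self.b = self.rng.uniform(-bound, bound, out_features)
        self.W_noisy = np.full((out_features, in_features), sigma0 * bound)
        self.b_noisy = np.full(out_features, sigma0 * bound)
        self.in_features = in_features
        self.out_features = out_features

    @staticmethod
    def _f(x):
        # Factorization function f(x) = sgn(x) * sqrt(|x|).
        return np.sign(x) * np.sqrt(np.abs(x))

    def __call__(self, x):
        # Factorized noise: only in + out unit noise vectors are sampled,
        # and the weight noise is their outer product.
        eps_in = self._f(self.rng.standard_normal(self.in_features))
        eps_out = self._f(self.rng.standard_normal(self.out_features))
        eps_w = np.outer(eps_out, eps_in)   # epsilon^w
        eps_b = eps_out                     # epsilon^b
        # y = (b + Wx) + (b_noisy * eps^b + (W_noisy * eps^w) x)
        return (self.b + self.W @ x) + \
               (self.b_noisy * eps_b + (self.W_noisy * eps_w) @ x)

layer = NoisyLinear(4, 2)
y = layer(np.ones(4))
print(y.shape)  # prints (2,)
```

Because fresh noise is drawn on every call, repeated forward passes on the same input give different outputs, which is precisely the stochasticity the agent exploits for exploration.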
Source: Noisy Networks for Exploration
| Task | Papers | Share |
|---|---|---|
| Reinforcement Learning (RL) | 8 | 47.06% |
| Atari Games | 3 | 17.65% |
| Efficient Exploration | 2 | 11.76% |
| Decision Making | 1 | 5.88% |
| Ensemble Learning | 1 | 5.88% |
| Game of Go | 1 | 5.88% |
| Montezuma's Revenge | 1 | 5.88% |