Entropy Regularization is a type of regularization used in reinforcement learning. For on-policy policy-gradient methods like A3C, mutual reinforcement between the actor and critic can produce a policy $\pi\left(a\mid{s}\right)$ that is highly peaked on a few actions or action sequences, since it is easier for the actor and critic to over-optimize on a small portion of the environment. To mitigate this, entropy regularization adds an entropy term to the loss that promotes action diversity:
$$H\left(\pi\left(\cdot\mid{s}\right)\right) = -\sum_{a}\pi\left(a\mid{s}\right)\log\pi\left(a\mid{s}\right)$$
Source: Asynchronous Methods for Deep Reinforcement Learning
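As a concrete sketch, the loss below combines a plain policy-gradient term with the entropy bonus defined above. The function name, the PyTorch framing, and the default coefficient of 0.01 (a common choice in A3C-style implementations) are illustrative assumptions, not prescribed by the method itself.

```python
import torch
import torch.nn.functional as F

def policy_loss_with_entropy(logits, actions, advantages, entropy_coef=0.01):
    """Policy-gradient loss with an entropy bonus (illustrative sketch).

    logits: unnormalized action scores from the policy, shape (batch, n_actions)
    actions: sampled action indices, shape (batch,)
    advantages: advantage estimates, shape (batch,)
    entropy_coef: weight of the entropy term (tunable hyperparameter)
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()

    # log pi(a|s) for the actions actually taken
    taken_log_probs = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)

    # H(pi(.|s)) = -sum_a pi(a|s) log pi(a|s), averaged over the batch
    entropy = -(probs * log_probs).sum(dim=-1).mean()

    # Standard policy-gradient objective; subtracting the entropy term
    # penalizes highly peaked (low-entropy) policies.
    pg_loss = -(taken_log_probs * advantages).mean()
    return pg_loss - entropy_coef * entropy

# Example usage with dummy data: 4 states, 6 discrete actions.
logits = torch.randn(4, 6)
actions = torch.randint(0, 6, (4,))
advantages = torch.randn(4)
loss = policy_loss_with_entropy(logits, actions, advantages)
```

Subtracting the weighted entropy from the loss is equivalent to maximizing entropy alongside expected return, so the optimizer is discouraged from collapsing the policy onto a handful of actions.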
Task | Papers | Share
---|---|---
Reinforcement Learning (RL) | 237 | 36.92%
Continuous Control | 33 | 5.14%
Atari Games | 27 | 4.21%
OpenAI Gym | 23 | 3.58%
Decision Making | 17 | 2.65%
Multi-agent Reinforcement Learning | 16 | 2.49%
Language Modelling | 9 | 1.40%
Image Classification | 9 | 1.40%
Management | 9 | 1.40%