Entropy Regularization is a type of regularization used in reinforcement learning. In on-policy policy-gradient methods such as A3C, mutual reinforcement between the actor and critic can drive the policy $\pi\left(a\mid{s}\right)$ to become highly peaked on a few actions or action sequences, since it is easier for the actor and critic to over-optimize on a small portion of the environment. To mitigate this, entropy regularization adds an entropy term to the loss that promotes action diversity:
$$H\left(\pi\left(\cdot\mid{s}\right)\right) = -\sum_{a}\pi\left(a\mid{s}\right)\log\pi\left(a\mid{s}\right)$$
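As a minimal sketch of how this term enters the objective, the snippet below computes an actor-critic loss with an entropy bonus for a discrete action space in PyTorch. The function name `a2c_loss`, the coefficient `entropy_coef`, and the value-loss weighting are illustrative assumptions, not details taken from the paper:

```python
import torch
import torch.nn.functional as F

def a2c_loss(logits, values, returns, actions, entropy_coef=0.01):
    """Actor-critic loss with an entropy bonus (illustrative sketch).

    logits:  (batch, num_actions) unnormalized action scores from the actor
    values:  (batch,) state-value estimates from the critic
    returns: (batch,) empirical returns (targets for the critic)
    actions: (batch,) actions actually taken
    """
    dist = torch.distributions.Categorical(logits=logits)
    advantages = returns - values.detach()

    # Standard policy-gradient and value-regression terms.
    policy_loss = -(dist.log_prob(actions) * advantages).mean()
    value_loss = F.mse_loss(values, returns)

    # Entropy H(pi(.|s)) = -sum_a pi(a|s) log pi(a|s), averaged over the batch.
    # Subtracting it from the loss rewards high-entropy (diverse) policies.
    entropy = dist.entropy().mean()

    return policy_loss + 0.5 * value_loss - entropy_coef * entropy
```

In practice the entropy coefficient is a small constant (on the order of 0.01) that trades off exploration against exploitation; too large a value prevents the policy from ever committing to good actions.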
Source: Asynchronous Methods for Deep Reinforcement Learning
| Task | Papers | Share |
|---|---|---|
| Continuous Control | 25 | 11.11% |
| Atari Games | 22 | 9.78% |
| OpenAI Gym | 16 | 7.11% |
| Decision Making | 8 | 3.56% |
| Image Classification | 8 | 3.56% |
| Multi-agent Reinforcement Learning | 7 | 3.11% |
| Semantic Segmentation | 7 | 3.11% |
| Object Detection | 6 | 2.67% |
| Domain Adaptation | 6 | 2.67% |