Entropy Regularization is a regularization technique used in reinforcement learning. In on-policy policy gradient methods such as A3C, the mutual reinforcement between actor and critic can drive the policy $\pi\left(a\mid{s}\right)$ to become highly peaked on a few actions or action sequences, since it is easy for the actor and critic to over-optimise on a small portion of the environment. To mitigate this, entropy regularization adds an entropy term to the loss to promote action diversity:
$$H\left(\pi\left(\cdot\mid{s}\right)\right) = -\sum_{a}\pi\left(a\mid{s}\right)\log\pi\left(a\mid{s}\right)$$
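For concreteness, below is a minimal PyTorch sketch of how the entropy term enters an actor's loss. The function name, its arguments, and the value of `entropy_coef` (the coefficient often denoted $\beta$) are illustrative assumptions, not taken from the source:

```python
import torch
import torch.nn.functional as F

def policy_gradient_loss(logits, actions, advantages, entropy_coef=0.01):
    """Hypothetical actor loss with an entropy bonus.

    logits:     (batch, num_actions) raw policy network outputs
    actions:    (batch,) sampled action indices (long tensor)
    advantages: (batch,) advantage estimates, treated as constants
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()

    # Policy gradient term: -E[log pi(a|s) * A(s, a)]
    chosen_log_probs = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    pg_loss = -(chosen_log_probs * advantages).mean()

    # Entropy H(pi(.|s)) = -sum_a pi(a|s) log pi(a|s), averaged over the batch
    entropy = -(probs * log_probs).sum(dim=-1).mean()

    # Subtracting the entropy bonus means minimizing the loss pushes
    # entropy up, discouraging a highly peaked policy
    return pg_loss - entropy_coef * entropy
```

Because the bonus is subtracted from the loss, gradient descent raises the policy's entropy; `entropy_coef` trades off this diversity pressure against the policy gradient objective.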
Source: Asynchronous Methods for Deep Reinforcement Learning
| Task | Papers | Share |
|---|---|---|
| Autonomous Driving | 123 | 11.91% |
| Reinforcement Learning (RL) | 99 | 9.58% |
| Reinforcement Learning | 87 | 8.42% |
| Deep Reinforcement Learning | 61 | 5.91% |
| Autonomous Vehicles | 44 | 4.26% |
| Decision Making | 41 | 3.97% |
| Language Modelling | 29 | 2.81% |
| Imitation Learning | 23 | 2.23% |
| Object Detection | 21 | 2.03% |