D4PG, or Distributed Distributional DDPG, is a policy gradient algorithm that extends DDPG. The improvements include a distributional critic update to the DDPG algorithm, combined with the use of multiple distributed workers all writing into the same replay table. Among the other, simpler changes, the biggest performance gain came from the use of $N$-step returns. The authors found that prioritized experience replay was less crucial to the overall D4PG algorithm, especially on harder problems.
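Since the $N$-step return is highlighted as the most impactful of the simpler changes, the following is a minimal sketch of how an $N$-step bootstrapped target can be accumulated for the critic update. The function name and arguments are illustrative, not taken from the paper's code.

```python
def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """Accumulate an N-step discounted return with a bootstrapped tail.

    rewards:          the N rewards r_t, ..., r_{t+N-1} along the trajectory
    bootstrap_value:  critic's value estimate at the state N steps ahead
    gamma:            discount factor
    """
    g = bootstrap_value
    # Work backwards from the bootstrapped tail value.
    for r in reversed(rewards):
        g = r + gamma * g
    return g


# Example: a 3-step return with a bootstrapped value of 5.0
print(n_step_return([1.0, 0.5, 0.0], bootstrap_value=5.0, gamma=0.99))
# 1.0 + 0.99*0.5 + 0.99^2*0.0 + 0.99^3*5.0 ≈ 6.35
```

In D4PG the same idea is applied to the distributional critic, where the target distribution (rather than a scalar value) is shifted and scaled by the accumulated $N$-step reward and discount.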
Source: Distributed Distributional Deterministic Policy Gradients
Task | Papers | Share |
---|---|---|
Reinforcement Learning | 6 | 37.50% |
Continuous Control | 4 | 25.00% |
Distributional Reinforcement Learning | 2 | 12.50% |
OpenAI Gym | 2 | 12.50% |
BIG-bench Machine Learning | 1 | 6.25% |
Image Generation | 1 | 6.25% |
Component | Type |
---|---|
 | Stochastic Optimization |
 | Normalization |
 | Value Function Estimation |
 | Replay Memory |