D4PG, or Distributed Distributional DDPG, is a policy gradient algorithm that extends DDPG. The improvements include a distributional update to the critic, combined with the use of multiple distributed workers all writing into the same replay table. Among the simpler changes, the biggest performance gain came from the use of $N$-step returns. The authors found that prioritized experience replay was less crucial to the overall D4PG algorithm, especially on harder problems.
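The $N$-step return replaces the usual one-step TD target with $n$ discounted rewards plus a bootstrapped value at the truncation point. A minimal sketch of that computation is below; the function name and interface are illustrative assumptions, not the authors' code (D4PG bootstraps from a distributional critic, which this scalar version simplifies).

```python
def n_step_returns(rewards, values, gamma=0.99, n=5):
    """Truncated n-step return targets for a trajectory.

    Illustrative helper, not the authors' implementation.
    rewards: list of length T, rewards[t] = reward after step t.
    values:  list of length T + 1, values[t] = critic estimate for state t
             (the last entry bootstraps the final partial return).
    """
    T = len(rewards)
    returns = []
    for t in range(T):
        # Sum up to n discounted rewards, fewer near the end of the trajectory.
        h = min(n, T - t)
        g = sum(gamma ** k * rewards[t + k] for k in range(h))
        # Bootstrap with the critic's value estimate at the truncation point.
        g += gamma ** h * values[t + h]
        returns.append(g)
    return returns
```

With $n = 1$ this reduces to the standard one-step TD target used in vanilla DDPG; larger $n$ propagates reward information faster at the cost of more variance.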
Source: Distributed Distributional Deterministic Policy Gradients
| Task | Papers | Share |
|---|---|---|
| Reinforcement Learning (RL) | 7 | 36.84% |
| Continuous Control | 4 | 21.05% |
| OpenAI Gym | 3 | 15.79% |
| Distributional Reinforcement Learning | 2 | 10.53% |
| Benchmarking | 1 | 5.26% |
| BIG-bench Machine Learning | 1 | 5.26% |
| Image Generation | 1 | 5.26% |
| Component | Type |
|---|---|
| Adam | Stochastic Optimization |
| Batch Normalization | Normalization |
| N-step Returns | Value Function Estimation |
| Prioritized Experience Replay | Replay Memory |