Experience Replay is a replay memory technique used in reinforcement learning in which we store the agent’s experience at each time-step, $e_{t} = \left(s_{t}, a_{t}, r_{t}, s_{t+1}\right)$, in a dataset $D = \{e_{1}, \cdots, e_{N}\}$ pooled over many episodes. During training we sample a minibatch of experiences uniformly at random from this memory and use it to learn off-policy, as in Deep Q-Networks. Because consecutive transitions within an episode are strongly correlated, learning from them directly leads to unstable training; sampling from the replay memory breaks this autocorrelation and makes the update distribution closer to the i.i.d. setting assumed in supervised learning.
Image Credit: Hands-On Reinforcement Learning with Python, Sudharsan Ravichandiran
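Below is a minimal sketch of a uniform replay buffer in Python. The class name `ReplayBuffer`, the fixed `capacity`, and the `(state, action, reward, next_state, done)` transition layout are illustrative assumptions, not a reference implementation from any particular paper.

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-size memory of transitions e_t = (s_t, a_t, r_t, s_{t+1}, done)."""

    def __init__(self, capacity=100_000):
        # Illustrative capacity; the oldest transitions are discarded once full.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        # Store one transition observed by the agent at this time-step.
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlation between
        # consecutive transitions before they are used for an off-policy update.
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```

In a typical DQN-style loop, the agent pushes one transition per environment step and only begins sampling minibatches once the buffer holds at least `batch_size` transitions.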
Task | Papers | Share |
---|---|---|
Reinforcement Learning (RL) | 295 | 36.74% |
Continuous Control | 57 | 7.10% |
Continual Learning | 53 | 6.60% |
OpenAI Gym | 30 | 3.74% |
Decision Making | 27 | 3.36% |
Multi-agent Reinforcement Learning | 24 | 2.99% |
Atari Games | 21 | 2.62% |
Imitation Learning | 14 | 1.74% |
Incremental Learning | 12 | 1.49% |