ACER (Actor-Critic with Experience Replay) is an off-policy actor-critic deep reinforcement learning agent. It can be seen as an off-policy extension of A3C, where the off-policy estimator is made feasible by truncated importance sampling with bias correction, stochastic dueling network architectures, and efficient trust region policy optimization.
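The core of ACER's off-policy correction is to split the importance ratio ρ = π(a|s)/μ(a|s) into a truncated term min(c, ρ) and a residual bias-correction weight max(0, 1 − c/ρ). A minimal sketch of that weight computation (not the authors' implementation; the threshold `c` is a hyperparameter and the function name is illustrative):

```python
import numpy as np

def truncated_importance_weights(pi, mu, c=10.0):
    """Split the importance ratio rho = pi / mu into a truncated part
    and a bias-correction part, in the style of ACER's off-policy estimator.

    pi: probabilities of the taken actions under the current policy
    mu: probabilities of the same actions under the behaviour policy
    c:  truncation threshold (illustrative default)
    """
    rho = np.asarray(pi) / np.asarray(mu)
    truncated = np.minimum(rho, c)               # bounded-variance term
    correction = np.maximum(0.0, 1.0 - c / rho)  # residual weight in [0, 1)
    return truncated, correction
```

Truncation caps the variance of the importance-sampled update, while the correction term keeps the overall estimator unbiased: for example, with π = 0.5 and μ = 0.01 the raw ratio is 50, so the truncated weight is 10 and the correction weight is 0.8.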
Task | Papers | Share |
---|---|---|
Reinforcement Learning | 5 | 38.46% |
Problem Decomposition | 2 | 15.38% |
Automatic Speech Recognition | 1 | 7.69% |
Speech Recognition | 1 | 7.69% |
Face Anti-Spoofing | 1 | 7.69% |
Face Recognition | 1 | 7.69% |
Spoken Dialogue Systems | 1 | 7.69% |
Continuous Control | 1 | 7.69% |