ACER, or Actor-Critic with Experience Replay, is an off-policy actor-critic deep reinforcement learning agent. It can be seen as an off-policy extension of A3C, where the off-policy estimator is made feasible by:

- truncated importance sampling with bias correction,
- stochastic dueling network architectures, and
- an efficient trust region policy optimization method.
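The core off-policy ingredient, truncated importance sampling with a bias-correction term, can be sketched as follows. This is a minimal illustration for a discrete action distribution, not code from the paper; the function name `acer_weights` and the example probabilities are hypothetical.

```python
import numpy as np

def acer_weights(pi, mu, c=10.0):
    """Truncated importance weights and bias-correction coefficients.

    pi: target-policy action probabilities, shape [n_actions]
    mu: behaviour-policy action probabilities, shape [n_actions]
    c:  truncation level bounding the variance of the estimator
    """
    rho = pi / mu                                   # importance ratios pi/mu
    rho_bar = np.minimum(rho, c)                    # truncated ratios, at most c
    correction = np.maximum(0.0, (rho - c) / rho)   # weight of the bias-correction term
    return rho_bar, correction

# Hypothetical probabilities over three actions
pi = np.array([0.7, 0.2, 0.1])
mu = np.array([0.25, 0.25, 0.5])
rho_bar, corr = acer_weights(pi, mu, c=2.0)
# Only the first action's ratio (2.8) exceeds c=2.0, so only it
# receives a non-zero bias-correction weight.
```

Truncation keeps the gradient variance bounded when `pi` and `mu` disagree strongly, while the correction term (applied under the target policy in the full algorithm) keeps the estimator unbiased.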
Task | Papers | Share
---|---|---
Reinforcement Learning (RL) | 5 | 26.32%
Face Anti-Spoofing | 3 | 15.79%
Face Recognition | 3 | 15.79%
Problem Decomposition | 2 | 10.53%
Face Presentation Attack Detection | 1 | 5.26%
Automatic Speech Recognition (ASR) | 1 | 5.26%
Benchmarking | 1 | 5.26%
Speech Recognition | 1 | 5.26%
Spoken Dialogue Systems | 1 | 5.26%