Mean Actor-Critic

We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action continuous-state reinforcement learning. MAC is a policy gradient algorithm that uses the agent's explicit representation of all action values to estimate the gradient of the policy, rather than using only the actions that were actually executed. We prove that this approach reduces variance in the policy gradient estimate relative to traditional actor-critic methods. We show empirical results on two control domains and on six Atari games, where MAC is competitive with state-of-the-art policy search algorithms.
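The key idea above is to form the policy gradient from an expectation over all actions, sum_a pi(a|s) Q(s,a), rather than from the single sampled action's log-probability as in standard actor-critic. Below is a minimal NumPy sketch of that surrogate objective; the function name `mac_objective` and the random inputs are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mac_objective(logits, q_values):
    """MAC-style surrogate: per-state policy-weighted mean of the
    critic's action values, sum_a pi(a|s) * Q(s,a), averaged over
    the batch. Differentiating w.r.t. the policy parameters (with Q
    treated as fixed) gives the all-actions gradient estimator, as
    opposed to using only the executed action's value."""
    pi = softmax(logits)                    # (batch, n_actions)
    return (pi * q_values).sum(axis=-1).mean()

# Tiny illustration with arbitrary critic values (hypothetical numbers).
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))   # 4 states, 3 discrete actions
q_vals = rng.normal(size=(4, 3))   # critic estimates Q(s, a) for EVERY action
print(mac_objective(logits, q_vals))
```

Because every action's value enters the estimate weighted by its probability, no sampling over actions is needed at a given state, which is the source of the variance reduction claimed above.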

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Atari Games | Atari 2600 Beam Rider | MAC | Score | 6072 | #36 |
| Atari Games | Atari 2600 Breakout | MAC | Score | 372.7 | #29 |
| Atari Games | Atari 2600 Pong | MAC | Score | 10.6 | #45 |
| Atari Games | Atari 2600 Q*Bert | MAC | Score | 243.4 | #52 |
| Atari Games | Atari 2600 Seaquest | MAC | Score | 1703.4 | #41 |
| Atari Games | Atari 2600 Space Invaders | MAC | Score | 1173.1 | #41 |
| Continuous Control | Cart Pole (OpenAI Gym) | MAC | Score | 178.3 | #1 |
| Continuous Control | Lunar Lander (OpenAI Gym) | MAC | Score | 163.5 | #1 |
