Distributed Deep Reinforcement Learning: Learn how to play Atari games in 21 minutes

We present a study in Distributed Deep Reinforcement Learning (DDRL) focused on the scalability of a state-of-the-art Deep Reinforcement Learning algorithm known as Batch Asynchronous Advantage Actor-Critic (BA3C). We show that using the Adam optimization algorithm with batch sizes of up to 2048 is a viable choice for carrying out large-scale machine learning computations. This, combined with a careful reexamination of the optimizer's hyperparameters, the use of synchronous training at the node level (while keeping the local, single-node part of the algorithm asynchronous), and minimizing the memory footprint of the model, allowed us to achieve linear scaling for up to 64 CPU nodes. This corresponds to a training time of 21 minutes on 768 CPU cores, compared with the 10 hours required by a baseline single-node implementation running on 24 cores.
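To make the core update concrete, below is a minimal sketch (not the authors' code) of the kind of batch advantage actor-critic step that BA3C-style training performs on each node: a shared convolutional policy/value network over stacked Atari frames, trained with Adam on a large batch of transitions (up to 2048, as studied in the paper). The network architecture, learning rate, and loss coefficients here are illustrative assumptions, not values taken from the paper.

```python
# Sketch of a batch advantage actor-critic update with Adam and a large batch.
# All hyperparameters and layer sizes are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicyValueNet(nn.Module):
    def __init__(self, num_actions: int):
        super().__init__()
        # Small conv trunk for 84x84 Atari frames with 4 stacked channels.
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.fc = nn.Linear(64 * 7 * 7, 512)
        self.policy_head = nn.Linear(512, num_actions)  # action logits
        self.value_head = nn.Linear(512, 1)              # state-value estimate V(s)

    def forward(self, frames):
        h = self.conv(frames).flatten(1)
        h = F.relu(self.fc(h))
        return self.policy_head(h), self.value_head(h).squeeze(-1)


def actor_critic_loss(logits, values, actions, returns, entropy_beta=0.01):
    # Advantage = empirical return minus the value baseline; the baseline is
    # detached so the policy gradient does not flow through the critic.
    advantages = returns - values.detach()
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    chosen_log_probs = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(chosen_log_probs * advantages).mean()
    value_loss = F.mse_loss(values, returns)
    entropy = -(probs * log_probs).sum(dim=-1).mean()  # exploration bonus
    return policy_loss + 0.5 * value_loss - entropy_beta * entropy


if __name__ == "__main__":
    batch_size, num_actions = 2048, 6  # large batch, as examined in the paper
    net = PolicyValueNet(num_actions)
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)  # lr is an assumption

    # Dummy batch standing in for transitions gathered by asynchronous workers.
    frames = torch.rand(batch_size, 4, 84, 84)
    actions = torch.randint(0, num_actions, (batch_size,))
    returns = torch.rand(batch_size)

    logits, values = net(frames)
    loss = actor_critic_loss(logits, values, actions, returns)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"loss = {loss.item():.4f}")
```

In the distributed setting described in the abstract, the gradients of such a step would additionally be averaged synchronously across nodes (e.g. via an all-reduce) before the Adam update is applied; that synchronization layer is omitted from this single-node sketch.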

Results from the Paper


Task        | Dataset                   | Model    | Metric Name | Metric Value | Global Rank
Atari Games | Atari 2600 Beam Rider     | DDRL A3C | Score       | 14900        | #24
Atari Games | Atari 2600 Boxing         | DDRL A3C | Score       | 98           | #21
Atari Games | Atari 2600 Breakout       | DDRL A3C | Score       | 350          | #37
Atari Games | Atari 2600 Pong           | DDRL A3C | Score       | 20           | #28
Atari Games | Atari 2600 Seaquest       | DDRL A3C | Score       | 1832         | #39
Atari Games | Atari 2600 Space Invaders | DDRL A3C | Score       | 650          | #48