WarpDrive: Extremely Fast End-to-End Deep Multi-Agent Reinforcement Learning on a GPU

31 Aug 2021  ·  Tian Lan, Sunil Srinivasa, Huan Wang, Stephan Zheng

Deep reinforcement learning (RL) is a powerful framework for training decision-making models in complex environments. However, RL can be slow because it requires repeated interaction with a simulation of the environment. In particular, there are key system engineering bottlenecks when using RL in complex environments that feature multiple agents with high-dimensional state, observation, or action spaces. We present WarpDrive, a flexible, lightweight, and easy-to-use open-source RL framework that implements end-to-end deep multi-agent RL on a single GPU (Graphics Processing Unit), built on PyCUDA and PyTorch. Using the extreme parallelization capability of GPUs, WarpDrive enables orders of magnitude faster RL than common implementations that blend CPU simulations and GPU models. Our design runs simulations, and the agents within each simulation, in parallel. It eliminates data copying between the CPU and GPU and uses a single simulation data store on the GPU that is safely updated in place. WarpDrive provides a lightweight Python interface and flexible environment wrappers that are easy to use and extend. Together, these allow users to easily run thousands of concurrent multi-agent simulations and train on extremely large batches of experience. Through extensive experiments, we verify that WarpDrive provides high throughput and scales almost linearly with the number of agents and parallel environments. For example, WarpDrive yields 2.9 million environment steps per second with 2000 environments and 1000 agents (at least 100x the throughput of a CPU implementation) in a benchmark Tag simulation. As such, WarpDrive is a fast and extensible multi-agent RL platform that significantly accelerates research and development.
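
To make the parallelization scheme concrete, here is a minimal PyCUDA sketch of the layout the abstract describes: one CUDA block per environment and one thread per agent, with the simulation state updated in place on the GPU. The kernel, array names, and toy dynamics below are illustrative assumptions, not WarpDrive's actual API or kernels; and unlike WarpDrive, which keeps a single data store resident on the GPU and shares it with PyTorch so no per-step transfers occur, this sketch copies data to and from the host for brevity.

```python
import numpy as np
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# Hypothetical step kernel: blockIdx.x indexes the environment,
# threadIdx.x indexes the agent within that environment.
kernel = SourceModule(r"""
__global__ void step(float *states, const float *actions,
                     float *rewards, int num_agents)
{
    int env = blockIdx.x;     // one block simulates one environment
    int agent = threadIdx.x;  // one thread simulates one agent
    if (agent < num_agents) {
        int idx = env * num_agents + agent;
        states[idx] += actions[idx];          // in-place state update on the GPU
        rewards[idx] = -fabsf(states[idx]);   // toy reward: distance from origin
    }
}
""")
step = kernel.get_function("step")

num_envs, num_agents = 2000, 1000  # the scale benchmarked in the abstract
states = np.zeros(num_envs * num_agents, dtype=np.float32)
actions = np.random.randn(num_envs * num_agents).astype(np.float32)
rewards = np.empty_like(states)

# Launch 2000 blocks of 1000 threads: every environment and every agent
# steps in parallel in a single kernel call.
step(drv.InOut(states), drv.In(actions), drv.Out(rewards),
     np.int32(num_agents),
     block=(num_agents, 1, 1), grid=(num_envs, 1))
```

Mapping environments to blocks and agents to threads is a natural fit for this workload: environments are independent and can run on separate blocks without coordination, while agents within one environment share a block and can therefore synchronize or exchange data through fast on-chip memory when the simulation requires it.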
