Ray: A Distributed Framework for Emerging AI Applications

16 Dec 2017

Philipp Moritz • Robert Nishihara • Stephanie Wang • Alexey Tumanov • Richard Liaw • Eric Liang • Melih Elibol • Zongheng Yang • William Paul • Michael I. Jordan • Ion Stoica

The next generation of AI applications will continuously interact with the environment and learn from these interactions. These applications impose new and demanding systems requirements, both in terms of performance and flexibility. In this paper, we consider these requirements and present Ray, a distributed system to address them. Ray implements a unified interface that can express both task-parallel and actor-based computations, supported by a single dynamic execution engine. To meet the performance requirements, Ray employs a distributed scheduler and a distributed and fault-tolerant store to manage the system's control state. In our experiments, we demonstrate scaling beyond 1.8 million tasks per second and better performance than existing specialized systems for several challenging reinforcement learning applications.
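As a concrete illustration of the unified interface, here is a minimal sketch using Ray's Python API: a stateless remote task and a stateful actor. The `square` function and `Counter` class are illustrative examples, not from the paper.

```python
import ray

ray.init()

# A remote task: a stateless function that Ray can schedule
# anywhere in the cluster.
@ray.remote
def square(x):
    return x * x

# A remote actor: a stateful worker process whose methods
# execute serially against its internal state.
@ray.remote
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

# .remote() returns a future immediately; ray.get() blocks until
# the result is available in the object store.
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]

counter = Counter.remote()
print(ray.get(counter.increment.remote()))  # 1
```

Both invocation styles return futures, so task-parallel and actor-based computations compose in the same program under the single dynamic execution engine the abstract describes.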

Full paper
