TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems

14 Mar 2016

Martín Abadi • Ashish Agarwal • Paul Barham • Eugene Brevdo • Zhifeng Chen • Craig Citro • Greg S. Corrado • Andy Davis • Jeffrey Dean • Matthieu Devin • Sanjay Ghemawat • Ian Goodfellow • Andrew Harp • Geoffrey Irving • Michael Isard • Yangqing Jia • Rafal Jozefowicz • Lukasz Kaiser • Manjunath Kudlur • Josh Levenberg • Dan Mané • Rajat Monga • Sherry Moore • Derek Murray • Chris Olah • Mike Schuster • Jonathon Shlens • Benoit Steiner • Ilya Sutskever • Kunal Talwar • Paul Tucker • Vincent Vanhoucke • Vijay Vasudevan • Fernanda Viégas • Oriol Vinyals • Pete Warden • Martin Wattenberg • Martin Wicke • Yuan Yu • Xiaoqiang Zheng

TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery.
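The portability the abstract describes comes from TensorFlow's dataflow model: a computation is first recorded as a graph of operations and only then executed, so the same graph can be placed on a phone, a GPU, or a cluster. The idea can be sketched in plain Python (an illustrative toy, not TensorFlow's actual API; all names here are hypothetical):

```python
# Toy dataflow graph: building a computation records nodes; nothing
# runs until the graph is explicitly executed. In TensorFlow, this
# separation is what lets a runtime place nodes on different devices.

class Node:
    def __init__(self, op, inputs=()):
        self.op = op          # callable applied to evaluated inputs
        self.inputs = inputs  # upstream Node dependencies

def const(value):
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, (a, b))

def mul(a, b):
    return Node(lambda x, y: x * y, (a, b))

def run(node):
    # Evaluate dependencies first, then apply this node's operation.
    return node.op(*(run(dep) for dep in node.inputs))

# Build the graph for y = (2 + 3) * 4, then execute it separately.
y = mul(add(const(2), const(3)), const(4))
print(run(y))  # → 20
```

In the real system, `run` is replaced by a runtime that partitions the graph across devices and inserts the communication between them, which is why a model expressed this way moves between platforms with little or no change.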

