Search Results for author: Gregory R. Ganger

Found 4 papers, 3 papers with code

Pollux: Co-adaptive Cluster Scheduling for Goodput-Optimized Deep Learning

2 code implementations • 27 Aug 2020 • Aurick Qiao, Sang Keun Choe, Suhas Jayaram Subramanya, Willie Neiswanger, Qirong Ho, Hao Zhang, Gregory R. Ganger, Eric P. Xing

Some recent schedulers choose job resources for users, but do so without awareness of how DL training can be re-optimized to better utilize the provided resources.

Fairness • Scheduling
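
As a rough illustration of the goodput objective named in the Pollux title, the sketch below scores candidate resource and batch-size configurations as system throughput multiplied by statistical efficiency. The function names, the efficiency model, and the candidate numbers are assumptions for illustration, not the paper's exact formulation.

# Rough sketch of a goodput-style score: system throughput (examples/sec)
# multiplied by statistical efficiency (useful progress per example).
# The efficiency model below is an illustrative assumption.

def statistical_efficiency(batch_size, gradient_noise_scale):
    # Past the gradient noise scale, each extra example in a batch
    # contributes less training progress.
    return (gradient_noise_scale + 1.0) / (gradient_noise_scale + batch_size)

def goodput(throughput_examples_per_sec, batch_size, gradient_noise_scale):
    return throughput_examples_per_sec * statistical_efficiency(
        batch_size, gradient_noise_scale)

# A co-adaptive scheduler could compare candidate (resources, batch size)
# configurations by the goodput each would deliver (numbers are made up).
candidates = [
    {"gpus": 2, "batch_size": 256, "throughput": 900.0},
    {"gpus": 4, "batch_size": 1024, "throughput": 3000.0},
]
best = max(candidates, key=lambda c: goodput(c["throughput"], c["batch_size"],
                                             gradient_noise_scale=512.0))
print(best)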

Accelerating Deep Learning by Focusing on the Biggest Losers

2 code implementations • 2 Oct 2019 • Angela H. Jiang, Daniel L.-K. Wong, Giulio Zhou, David G. Andersen, Jeffrey Dean, Gregory R. Ganger, Gauri Joshi, Michael Kaminsky, Michael Kozuch, Zachary C. Lipton, Padmanabhan Pillai

This paper introduces Selective-Backprop, a technique that accelerates the training of deep neural networks (DNNs) by prioritizing examples with high loss at each iteration.
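
A minimal PyTorch sketch of that idea follows: compute per-example losses in a cheap forward pass, then backpropagate only the highest-loss examples. The tiny model, synthetic data, and strict top-k selection are illustrative assumptions; the paper's actual selection policy is loss-based sampling rather than a hard cutoff.

# Sketch of loss-prioritized backprop: forward the whole batch without
# gradients, then run backprop only on the hardest (highest-loss) examples.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(20, 5)                         # stand-in for a real DNN
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss(reduction="none")  # per-example losses
keep_fraction = 0.25                             # backprop the top 25%

for step in range(10):
    x = torch.randn(64, 20)                      # synthetic batch
    y = torch.randint(0, 5, (64,))

    # Cheap forward pass over the full batch; no gradients needed yet.
    with torch.no_grad():
        per_example_loss = loss_fn(model(x), y)

    # Keep the examples with the highest loss for the backward pass.
    k = max(1, int(keep_fraction * x.size(0)))
    hard_idx = torch.topk(per_example_loss, k).indices

    optimizer.zero_grad()
    selected_loss = loss_fn(model(x[hard_idx]), y[hard_idx]).mean()
    selected_loss.backward()
    optimizer.step()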

MLtuner: System Support for Automatic Machine Learning Tuning

1 code implementation • 20 Mar 2018 • Henggang Cui, Gregory R. Ganger, Phillip B. Gibbons

MLtuner automatically tunes settings for training tunables (such as the learning rate, the momentum, the mini-batch size, and the data staleness bound) that have a significant impact on large-scale machine learning (ML) performance.

BIG-bench Machine Learning • General Classification • +2
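
A hypothetical sketch of such a tuning loop is shown below: run short trials with candidate tunable settings, keep whichever setting makes the fastest training progress, and continue the full run with it. The candidate grid, the trial budget, and the progress metric are assumptions for illustration, not MLtuner's implementation; run_trial is a placeholder for real training code.

# Hypothetical trial-based tuning loop: short trials over candidate
# tunable settings, then pick the setting with the best measured progress.
import random

def run_trial(settings, num_steps=50):
    # Placeholder for a short training run; returns a progress metric
    # such as loss reduction per second. Replace with real training code.
    random.seed(hash(frozenset(settings.items())) % (2**32))
    return random.uniform(0.0, 1.0)

candidate_settings = [
    {"learning_rate": lr, "momentum": m, "batch_size": b}
    for lr in (0.001, 0.01, 0.1)
    for m in (0.0, 0.9)
    for b in (32, 256)
]

best = max(candidate_settings, key=run_trial)
print("settings chosen for the full run:", best)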
