Search Results for author: Phillip B. Gibbons

Found 12 papers, 4 papers with code

ACRoBat: Optimizing Auto-batching of Dynamic Deep Learning at Compile Time

no code implementations • 17 May 2023 • Pratik Fegade, Tianqi Chen, Phillip B. Gibbons, Todd C. Mowry

Dynamic control flow is an important technique often used to design expressive and efficient deep learning computations for applications such as text parsing, machine translation, and early exit from deep models.

Tasks: Code Generation, Machine Translation, +1
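
The data-dependent control flow that makes auto-batching hard shows up in, for example, an early-exit model: different inputs leave the network at different depths, so batch shapes are only known at run time. Below is a minimal NumPy sketch of such a model; the layer sizes, threshold, and function names are illustrative assumptions, not ACRoBat's code.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative 3-layer network with an auxiliary classifier per layer.
    # Weights are random stand-ins; in practice they would be trained.
    layers = [rng.standard_normal((16, 16)) * 0.5 for _ in range(3)]
    heads = [rng.standard_normal((16, 4)) * 0.5 for _ in range(3)]

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def early_exit_forward(x, threshold=0.9):
        """Run layers until an auxiliary head is confident enough."""
        h = x
        for i, (w, head) in enumerate(zip(layers, heads)):
            h = np.tanh(h @ w)
            probs = softmax(h @ head)
            if probs.max() >= threshold:        # data-dependent control flow
                return probs.argmax(), i        # exit early
        return probs.argmax(), len(layers) - 1  # fell through to the end

    # Different inputs exit at different layers, so a static batch
    # schedule cannot be fixed ahead of time.
    for n in range(3):
        label, depth = early_exit_forward(rng.standard_normal(16))
        print(f"input {n}: class {label}, exited after layer {depth}")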

Cortex: A Compiler for Recursive Deep Learning Models

no code implementations • 2 Nov 2020 • Pratik Fegade, Tianqi Chen, Phillip B. Gibbons, Todd C. Mowry

Optimizing deep learning models is generally performed in two steps: (i) high-level graph optimizations, such as kernel fusion, and (ii) low-level kernel optimizations, such as those found in vendor libraries.
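
To make the two levels concrete, here is a toy sketch of what a graph-level fusion pass aims for, contrasted with the unfused graph. It is purely illustrative, not Cortex's generated code; as the comments note, NumPy cannot truly fuse kernels.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_normal((512, 512))
    w = rng.standard_normal((512, 512))

    def unfused(x, w):
        # Two separate "kernels" in the graph: matmul, then relu.
        y = x @ w                  # materializes an intermediate buffer
        return np.maximum(y, 0)    # second full pass over memory

    def fused(x, w):
        # What a graph-level fusion pass aims for: matmul + relu emitted
        # as one kernel with no separate intermediate pass. NumPy cannot
        # truly fuse; real fusion happens in generated or vendor code.
        out = x @ w
        np.maximum(out, 0, out=out)   # relu applied in place
        return out

    assert np.allclose(unfused(x, w), fused(x, w))
    print("fused and unfused versions agree")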

DriftSurf: A Risk-competitive Learning Algorithm under Concept Drift

no code implementations • 13 Mar 2020 • Ashraf Tahmasbi, Ellango Jothimurugesan, Srikanta Tirthapura, Phillip B. Gibbons

The advantage of our approach is that we can use aggressive drift detection in the stable state to achieve a high detection rate, but mitigate the false positive rate of standalone drift detection via a reactive state that reacts quickly to true drifts while eliminating most false positives.
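
A minimal sketch of that stable/reactive pattern follows, using a crude error-rate-threshold detector as a stand-in; the paper's risk-competitive algorithm and its model-selection logic are substantially more involved.

    import random

    random.seed(0)
    WINDOW, REACTIVE_LEN = 30, 20
    state, reactive_left, adapted = "stable", 0, False
    errors = []

    def drift_detected(errors, threshold=0.35):
        # Aggressive stand-in detector: flag drift whenever the recent
        # error rate crosses a threshold (illustrative only).
        return sum(errors) / len(errors) > threshold

    for t in range(300):
        # Simulated stream: the concept drifts at t=150, so the old
        # model's error rate jumps until a fresh model replaces it.
        p_err = 0.1 if (t < 150 or adapted) else 0.5
        errors = (errors + [1 if random.random() < p_err else 0])[-WINDOW:]
        if len(errors) < WINDOW:
            continue
        if state == "stable" and drift_detected(errors):
            # Enter the reactive state instead of resetting outright:
            # train a fresh model alongside the old one and keep whichever
            # predicts better, which filters out false alarms.
            state, reactive_left = "reactive", REACTIVE_LEN
            print(f"t={t}: possible drift, entering reactive state")
        elif state == "reactive":
            reactive_left -= 1
            if reactive_left == 0:
                state, adapted, errors = "stable", True, []  # adopt new model
                print(f"t={t}: drift confirmed, switched models")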

The Non-IID Data Quagmire of Decentralized Machine Learning

1 code implementation • ICML 2020 • Kevin Hsieh, Amar Phanishayee, Onur Mutlu, Phillip B. Gibbons

Our study shows that: (i) skewed data labels are a fundamental and pervasive problem for decentralized learning, causing significant accuracy loss across many ML applications, DNN models, training datasets, and decentralized learning algorithms; (ii) the problem is particularly challenging for DNN models with batch normalization; and (iii) the degree of data skew is a key determinant of the difficulty of the problem.

Tasks: BIG-bench Machine Learning
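
A small sketch of the kind of skewed label partition the study examines: each simulated worker gets a "home" share of certain classes plus a uniform remainder. The partitioning scheme and skew parameter are assumptions for illustration, not the paper's exact setup.

    import numpy as np

    rng = np.random.default_rng(0)
    labels = rng.integers(0, 10, size=10_000)    # 10-class dataset

    def skewed_partition(labels, n_workers=5, skew=0.8):
        """Route each example to its label's 'home' worker with prob. skew."""
        parts = [[] for _ in range(n_workers)]
        for idx, y in enumerate(labels):
            home = y % n_workers                 # class -> home worker
            w = home if rng.random() < skew else rng.integers(n_workers)
            parts[w].append(idx)
        return parts

    # Each worker ends up dominated by two classes (0 & 5, 1 & 6, ...),
    # the kind of label skew the study finds decentralized learning
    # handles poorly.
    for w, idxs in enumerate(skewed_partition(labels)):
        hist = np.bincount(labels[idxs], minlength=10)
        print(f"worker {w}: label histogram {hist}")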

MLtuner: System Support for Automatic Machine Learning Tuning

1 code implementation • 20 Mar 2018 • Henggang Cui, Gregory R. Ganger, Phillip B. Gibbons

MLtuner automatically tunes settings for training tunables (such as the learning rate, the momentum, the mini-batch size, and the data staleness bound) that have a significant impact on large-scale machine learning (ML) performance.

Tasks: BIG-bench Machine Learning, General Classification, +2
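
A toy rendering of the idea: run short trials with candidate tunable settings, score each by training progress, and adopt the best. The trial_progress function below is a simulated stand-in; MLtuner's actual search and convergence detection are more sophisticated.

    import random

    random.seed(42)

    def trial_progress(lr, momentum, batch_size):
        # Stand-in for running a short training trial and measuring
        # loss reduction; a real tuner would train the actual model.
        score = -abs(lr - 0.01) * 50 - abs(momentum - 0.9) - batch_size / 4096
        return score + random.gauss(0, 0.05)

    # Candidate settings for the tunables named in the abstract
    # (values here are arbitrary examples).
    candidates = [
        {"lr": lr, "momentum": m, "batch_size": b}
        for lr in (0.1, 0.01, 0.001)
        for m in (0.0, 0.9)
        for b in (128, 1024)
    ]

    best = max(candidates, key=lambda c: trial_progress(**c))
    print("selected tunables:", best)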

Focus: Querying Large Video Datasets with Low Latency and Low Cost

no code implementations • 10 Jan 2018 • Kevin Hsieh, Ganesh Ananthanarayanan, Peter Bodik, Paramvir Bahl, Matthai Philipose, Phillip B. Gibbons, Onur Mutlu

Focus handles the lower accuracy of the cheap CNNs by judiciously leveraging expensive CNNs at query time.
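
A sketch of that ingest/query split: a cheap classifier indexes every frame by its top-k candidate classes at ingest time, and the expensive classifier runs at query time only on the frames the index suggests. Both classifiers below are random stubs, not real CNNs.

    import random

    random.seed(7)
    CLASSES = ["car", "bus", "person", "dog"]

    def cheap_cnn(frame):
        # Stub for a fast, less accurate ingest-time classifier that
        # returns top-k candidate classes rather than one answer.
        return random.sample(CLASSES, k=2)

    def expensive_cnn(frame):
        # Stub for the slow, accurate classifier; too costly to run
        # on every frame at ingest time.
        return random.choice(CLASSES)

    # Ingest time: index every frame under its top-k candidate classes.
    index = {c: [] for c in CLASSES}
    for frame in range(1000):
        for c in cheap_cnn(frame):
            index[c].append(frame)

    # Query time: run the expensive CNN only on the indexed candidates.
    query = "bus"
    matches = [f for f in index[query] if expensive_cnn(f) == query]
    print(f"{len(index[query])} candidate frames, {len(matches)} confirmed")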

More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server

no code implementations • NeurIPS 2013 • Qirong Ho, James Cipar, Henggang Cui, Seunghak Lee, Jin Kyu Kim, Phillip B. Gibbons, Garth A. Gibson, Greg Ganger, Eric P. Xing

We propose a parameter server system for distributed ML, which follows a Stale Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees.
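
The SSP condition itself is simple to state: a worker may begin its next clock only if it is at most s clocks ahead of the slowest worker, where s is the staleness bound. The small simulation below exercises just that rule, not the full parameter server; the constants are arbitrary.

    import random

    random.seed(3)
    N_WORKERS, STALENESS, TOTAL_CLOCKS = 4, 2, 10
    clocks = [0] * N_WORKERS    # clocks completed by each worker

    def can_advance(w):
        # SSP rule: worker w may start its next clock only if it is at
        # most STALENESS clocks ahead of the slowest worker.
        return clocks[w] - min(clocks) <= STALENESS

    while min(clocks) < TOTAL_CLOCKS:
        w = random.randrange(N_WORKERS)   # some worker becomes ready
        if can_advance(w):
            clocks[w] += 1                # one clock of useful work
        else:
            print(f"worker {w} blocked at clock {clocks[w]} "
                  f"(slowest is at {min(clocks)})")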
