Search Results for author: Seunghak Lee

Found 9 papers, 0 papers with code

Fast Dimensional Analysis for Root Cause Investigation in a Large-Scale Service Environment

no code implementations · 1 Nov 2019 · Fred Lin, Keyur Muzumdar, Nikolay Pavlovich Laptev, Mihai-Valentin Curelea, Seunghak Lee, Sriram Sankar

In this paper we present a fast dimensional analysis framework that automates root cause analysis on structured logs with improved scalability.
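
The framework mines structured logs for attribute-value combinations that are over-represented among failures. The sketch below is a rough illustration of that general idea, not the paper's implementation; the function name, the pattern-order cap, and the lift scoring are all assumptions.

```python
# Illustrative sketch: score attribute=value combinations by how
# over-represented they are in failing log entries versus all entries.
# Not the paper's implementation; names and scoring are assumptions.
from collections import Counter
from itertools import combinations

def score_dimensions(failing_logs, all_logs, max_order=2):
    """failing_logs / all_logs: lists of dicts mapping attribute -> value."""
    def patterns(row):
        items = sorted(row.items())
        for r in range(1, max_order + 1):
            for combo in combinations(items, r):
                yield combo

    fail_counts, total_counts = Counter(), Counter()
    for row in failing_logs:
        fail_counts.update(patterns(row))
    for row in all_logs:
        total_counts.update(patterns(row))

    base_rate = len(failing_logs) / len(all_logs)
    scores = {}
    for combo, n_total in total_counts.items():
        n_fail = fail_counts.get(combo, 0)
        if n_fail == 0:
            continue
        # Lift: how much more likely failure is when this combination holds.
        scores[combo] = (n_fail / n_total) / base_rate
    return sorted(scores.items(), key=lambda kv: -kv[1])
```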

Stability Selection for Structured Variable Selection

no code implementations · 13 Dec 2017 · George Philipp, Seunghak Lee, Eric P. Xing

Recently, a meta-algorithm called Stability Selection was proposed that can provide reliable finite-sample control of the number of false positives.

Variable Selection
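
Stability selection wraps a base variable selector in repeated subsampling and keeps only the features selected in a large fraction of the runs, which is what yields the finite-sample false-positive control. Below is a minimal sketch with the lasso as the base selector; the threshold and hyperparameters are illustrative assumptions.

```python
# Minimal stability-selection sketch: subsample half the data, fit a
# lasso, and record which coefficients are nonzero; keep features whose
# selection frequency exceeds a threshold. Hyperparameters are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, alpha=0.1, n_subsamples=100, threshold=0.6, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    selected = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)  # subsample half the data
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        selected += (coef != 0)
    freq = selected / n_subsamples          # per-feature selection frequency
    return np.where(freq >= threshold)[0]   # stably selected features
```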

On Model Parallelization and Scheduling Strategies for Distributed Machine Learning

no code implementations · NeurIPS 2014 · Seunghak Lee, Jin Kyu Kim, Xun Zheng, Qirong Ho, Garth A. Gibson, Eric P. Xing

Distributed machine learning has typically been approached from a data parallel perspective, where big data are partitioned across multiple workers and an algorithm is executed concurrently over different data subsets under various synchronization schemes to ensure speed-up and/or correctness.

BIG-bench Machine Learning · Scheduling
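
In the model-parallel view, it is the parameters rather than the data that are partitioned across workers. The serial toy simulation below, block-partitioned coordinate descent for the lasso, conveys that perspective; it is an illustration only, not the paper's system, and the names and blocking scheme are assumptions.

```python
# Toy model parallelism: the MODEL (coordinates) is split into blocks,
# one per worker, and each "worker" updates only its own coordinates.
# Serial simulation for illustration; not the paper's system.
import numpy as np

def model_parallel_cd(X, y, lam=0.1, n_workers=4, n_epochs=20):
    n, p = X.shape
    beta = np.zeros(p)
    residual = y - X @ beta
    blocks = np.array_split(np.arange(p), n_workers)  # one block per worker
    for _ in range(n_epochs):
        for block in blocks:      # in a real system these run on workers
            for j in block:       # each worker owns its block of coordinates
                residual += X[:, j] * beta[j]          # partial residual
                rho = X[:, j] @ residual
                z = X[:, j] @ X[:, j]
                # Soft-threshold update (sklearn-style lambda scaling).
                beta[j] = np.sign(rho) * max(abs(rho) - lam * n, 0.0) / z
                residual -= X[:, j] * beta[j]
    return beta
```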

Screening Rules for Overlapping Group Lasso

no code implementations · 25 Oct 2014 · Seunghak Lee, Eric P. Xing

However, screening for overlapping group lasso remains an open challenge because the overlaps between groups make it infeasible to test each group independently.
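
For the non-overlapping group lasso, a screening rule can test each group in isolation via the correlation of its columns with the residual and discard groups that cannot be active. The sketch below shows a strong-rule-style test of that form (thresholds are illustrative, not the paper's rule); it is exactly this independent per-group test that overlapping groups invalidate, which is the difficulty the paper addresses.

```python
# Strong-rule-style screening sketch for the NON-overlapping group lasso:
# a group can only be active if its columns correlate strongly enough
# with the residual. Illustrative thresholds; not the paper's rule.
import numpy as np

def screen_groups(X, residual, groups, lam, lam_prev):
    """groups: list of column-index arrays; residual: y - X @ beta at lam_prev.
    Returns indices of groups that survive screening at lam."""
    keep = []
    for g, idx in enumerate(groups):
        score = np.linalg.norm(X[:, idx].T @ residual)
        if score >= 2 * lam - lam_prev:  # strong-rule style threshold
            keep.append(g)               # group may be active: keep it
    return keep
```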

Primitives for Dynamic Big Model Parallelism

no code implementations · 18 Jun 2014 · Seunghak Lee, Jin Kyu Kim, Xun Zheng, Qirong Ho, Garth A. Gibson, Eric P. Xing

When training large machine learning models with many variables or parameters, a single machine is often inadequate since the model may be too large to fit in memory, while training can take a long time even with stochastic updates.

Scheduling

Petuum: A New Platform for Distributed Machine Learning on Big Data

no code implementations · 30 Dec 2013 · Eric P. Xing, Qirong Ho, Wei Dai, Jin Kyu Kim, Jinliang Wei, Seunghak Lee, Xun Zheng, Pengtao Xie, Abhimanu Kumar, Yao-Liang Yu

What is a systematic way to efficiently apply a wide spectrum of advanced ML programs to industrial scale problems, using Big Models (up to 100s of billions of parameters) on Big Data (up to terabytes or petabytes)?

BIG-bench Machine Learning · Scheduling

Structure-Aware Dynamic Scheduler for Parallel Machine Learning

no code implementations · 19 Dec 2013 · Seunghak Lee, Jin Kyu Kim, Qirong Ho, Garth A. Gibson, Eric P. Xing

Training large machine learning (ML) models with many variables or parameters can take a long time if one employs sequential procedures even with stochastic updates.

BIG-bench Machine Learning · Distributed Computing
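
One plausible reading of "structure-aware" scheduling is to batch together parameter updates whose feature columns are nearly uncorrelated, so that running them in parallel interferes little with convergence. The toy scheduler below is a guess at that general idea, not the paper's algorithm; the names and the correlation threshold are assumptions.

```python
# Toy structure-aware scheduler: greedily pick a batch of coordinates
# whose (normalized) feature columns are mutually low-correlation, so
# their parallel updates conflict little. Illustrative only.
import numpy as np

def schedule_batch(X, candidates, batch_size, max_corr=0.1):
    """Greedily select up to batch_size mutually low-correlation columns."""
    Xn = X / np.linalg.norm(X, axis=0)  # normalize columns once
    batch = []
    for j in candidates:
        if all(abs(Xn[:, j] @ Xn[:, k]) < max_corr for k in batch):
            batch.append(j)
        if len(batch) == batch_size:
            break
    return batch
```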

More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server

no code implementations · NeurIPS 2013 · Qirong Ho, James Cipar, Henggang Cui, Seunghak Lee, Jin Kyu Kim, Phillip B. Gibbons, Garth A. Gibson, Greg Ganger, Eric P. Xing

We propose a parameter server system for distributed ML, which follows a Stale Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees.
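
Under SSP, a worker may read parameter values that are stale by up to a fixed number of clocks, so fast workers rarely block, yet no worker can run more than that bound ahead of the slowest one. Below is a minimal sketch of just that consistency check; the class and threading details are illustrative, not the paper's server.

```python
# Minimal SSP consistency check: a worker blocks whenever it is more
# than `staleness` clocks ahead of the slowest worker. The slowest
# worker never waits, so progress is guaranteed. Illustrative sketch.
import threading

class SSPClock:
    def __init__(self, n_workers, staleness):
        self.clocks = [0] * n_workers
        self.staleness = staleness
        self.cond = threading.Condition()

    def advance(self, worker_id):
        with self.cond:
            self.clocks[worker_id] += 1
            self.cond.notify_all()
            # Block while this worker is more than `staleness` clocks
            # ahead of the slowest worker.
            while self.clocks[worker_id] > min(self.clocks) + self.staleness:
                self.cond.wait()
```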

Adaptive Multi-Task Lasso: with Application to eQTL Detection

no code implementations · NeurIPS 2010 · Seunghak Lee, Jun Zhu, Eric P. Xing

To understand the relationship between genomic variation among populations and complex diseases, it is essential to detect eQTLs that are associated with phenotypic effects.

Multi-Task Learning · regression
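
For background, the (non-adaptive) multi-task lasso couples the K per-task regressions through an l1/l2 penalty on each feature's coefficients across tasks; the paper's adaptive variant additionally learns per-feature penalty weights from prior knowledge (see the paper for the exact form). The baseline objective, with beta_k the coefficient vector of task k and beta^(j) the row of coefficients for feature j across tasks, is:

```latex
% Baseline multi-task lasso objective (the paper's adaptive variant
% reweights the penalties per feature; exact form given in the paper).
\min_{B} \; \frac{1}{2} \sum_{k=1}^{K} \lVert y_k - X \beta_k \rVert_2^2
\; + \; \lambda \sum_{j=1}^{p} \lVert \beta^{(j)} \rVert_2
```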
