Search Results for author: Gyeong-In Yu

Found 9 papers, 4 papers with code

Terra: Imperative-Symbolic Co-Execution of Imperative Deep Learning Programs

no code implementations NeurIPS 2021 Taebum Kim, Eunji Jeong, Geon-Woo Kim, Yunmo Koo, Sehoon Kim, Gyeong-In Yu, Byung-Gon Chun

Recently, several systems have been proposed to combine the usability of imperative programming with the optimized performance of symbolic graph execution.
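To illustrate the contrast this line of work targets, here is a minimal sketch (not Terra itself) that runs the same PyTorch function both imperatively and as a traced symbolic graph; the function and tensor shapes are illustrative assumptions.

```python
import torch

def model_step(x, w):
    # Imperative code: runs op by op, easy to inspect and modify.
    return torch.relu(x @ w)

x = torch.randn(4, 8)
w = torch.randn(8, 2)

eager_out = model_step(x, w)                    # imperative (eager) execution
graph_fn = torch.jit.trace(model_step, (x, w))  # compile a symbolic graph once
graph_out = graph_fn(x, w)                      # execute the optimized graph

assert torch.allclose(eager_out, graph_out)
```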

Nimble: Lightweight and Parallel GPU Task Scheduling for Deep Learning

1 code implementation NeurIPS 2020 Woosuk Kwon, Gyeong-In Yu, Eunji Jeong, Byung-Gon Chun

Ideally, DL frameworks should be able to fully utilize the computation power of GPUs such that the running time depends on the amount of computation assigned to GPUs.

Scheduling
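As a rough illustration of parallel GPU task scheduling (not Nimble's scheduler), the sketch below launches two independent tasks on separate CUDA streams in PyTorch so they can overlap; the matmul workload and sizes are illustrative assumptions.

```python
import torch

if torch.cuda.is_available():
    a = torch.randn(1024, 1024, device="cuda")
    b = torch.randn(1024, 1024, device="cuda")

    s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()
    with torch.cuda.stream(s1):
        out1 = a @ a          # task 1, issued on stream 1
    with torch.cuda.stream(s2):
        out2 = b @ b          # task 2, issued on stream 2; may run in parallel
    torch.cuda.synchronize()  # wait for both streams to finish
```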

A Tensor Compiler for Unified Machine Learning Prediction Serving

1 code implementation 9 Oct 2020 Supun Nakandala, Karla Saur, Gyeong-In Yu, Konstantinos Karanasos, Carlo Curino, Markus Weimer, Matteo Interlandi

Machine Learning (ML) adoption in the enterprise requires simpler and more efficient software infrastructure; the bespoke solutions typical of large web companies are simply untenable.

BIG-bench Machine Learning
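For context, the tensor-compilation idea can be tried with the open-source hummingbird-ml package associated with this paper; the model choice and random data below are illustrative, and the snippet is a minimal sketch rather than a full serving setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from hummingbird.ml import convert

X = np.random.rand(200, 10).astype(np.float32)
y = np.random.randint(2, size=200)

skl_model = RandomForestClassifier(n_estimators=10).fit(X, y)
hb_model = convert(skl_model, "pytorch")  # translate the trees into tensor operations
print(hb_model.predict(X[:5]))            # served through the PyTorch runtime
```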

Accelerating Multi-Model Inference by Merging DNNs of Different Weights

no code implementations 28 Sep 2020 Joo Seong Jeong, Soojeong Kim, Gyeong-In Yu, Yunseong Lee, Byung-Gon Chun

Standardized DNN models that have been proven to perform well on machine learning tasks are widely used and often adopted as-is to solve downstream tasks, forming the transfer learning paradigm.

Transfer Learning
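A minimal sketch of the merging intuition (not the paper's system): two models that share an architecture but differ only in weights can be served with one batched operation instead of two separate launches. All names and shapes are illustrative assumptions.

```python
import torch

x = torch.randn(32, 128)    # one request batch served to both models
w_a = torch.randn(128, 10)  # weights of fine-tuned model A
w_b = torch.randn(128, 10)  # weights of fine-tuned model B

# Separate inference: two kernel launches for the same input.
out_a, out_b = x @ w_a, x @ w_b

# Merged inference: stack the weights and issue a single batched matmul.
w_merged = torch.stack([w_a, w_b])       # shape (2, 128, 10)
out_merged = torch.matmul(x, w_merged)   # broadcasts x over the model axis: (2, 32, 10)

assert torch.allclose(out_merged[0], out_a, atol=1e-5)
assert torch.allclose(out_merged[1], out_b, atol=1e-5)
```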

Stage-based Hyper-parameter Optimization for Deep Learning

no code implementations 24 Nov 2019 Ahnjae Shin, Dong-Jin Shin, Sungwoo Cho, Do Yoon Kim, Eunji Jeong, Gyeong-In Yu, Byung-Gon Chun

As deep learning techniques continue to advance, hyper-parameter optimization has become a major new workload in deep learning clusters.
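A minimal sketch of the stage-based idea (not the paper's system): trials that share an identical configuration for an early stage can reuse that stage's result instead of retraining it per trial. The stage names and cache below are hypothetical.

```python
stage_cache = {}

def run_stage(stage_name, params, parent=None):
    # Reuse the result if an identical stage (same config and same predecessor)
    # has already been trained for another trial.
    key = (stage_name, tuple(sorted(params.items())), parent)
    if key not in stage_cache:
        stage_cache[key] = f"checkpoint[{key}]"  # placeholder for actual training
    return stage_cache[key]

trials = [
    {"warmup": {"lr": 0.1}, "finetune": {"lr": 0.01}},
    {"warmup": {"lr": 0.1}, "finetune": {"lr": 0.001}},  # identical warmup stage
]
for trial in trials:
    warm = run_stage("warmup", trial["warmup"])            # computed once, reused once
    best = run_stage("finetune", trial["finetune"], warm)  # branches off the shared warmup
```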

Making Classical Machine Learning Pipelines Differentiable: A Neural Translation Approach

1 code implementation 10 Jun 2019 Gyeong-In Yu, Saeed Amizadeh, Sehoon Kim, Artidoro Pagnoni, Byung-Gon Chun, Markus Weimer, Matteo Interlandi

To this end, we propose a framework that translates a pre-trained ML pipeline into a neural network and fine-tunes the ML models within the pipeline jointly using backpropagation.

BIG-bench Machine Learning, Translation
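A minimal sketch of the translation idea (not the paper's framework): a fitted scikit-learn pipeline is re-expressed as differentiable tensor operations initialized from its learned parameters, then fine-tuned jointly with backpropagation. The pipeline, data, and training loop below are illustrative assumptions.

```python
import torch
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
pipe = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
scaler = pipe.named_steps["standardscaler"]
clf = pipe.named_steps["logisticregression"]

# Translate both pipeline operators into differentiable tensor parameters.
mean = torch.tensor(scaler.mean_, dtype=torch.float32, requires_grad=True)
scale = torch.tensor(scaler.scale_, dtype=torch.float32, requires_grad=True)
w = torch.tensor(clf.coef_, dtype=torch.float32, requires_grad=True)
b = torch.tensor(clf.intercept_, dtype=torch.float32, requires_grad=True)

xt = torch.tensor(X, dtype=torch.float32)
yt = torch.tensor(y, dtype=torch.float32)
opt = torch.optim.SGD([mean, scale, w, b], lr=0.01)
for _ in range(50):  # fine-tune the whole pipeline jointly via backpropagation
    logits = ((xt - mean) / scale) @ w.T + b
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logits.squeeze(1), yt)
    opt.zero_grad()
    loss.backward()
    opt.step()
```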

JANUS: Fast and Flexible Deep Learning via Symbolic Graph Execution of Imperative Programs

no code implementations 4 Dec 2018 Eunji Jeong, Sungwoo Cho, Gyeong-In Yu, Joo Seong Jeong, Dong-Jin Shin, Byung-Gon Chun

The rapid evolution of deep neural networks demands that deep learning (DL) frameworks not only execute large computations quickly, but also support straightforward programming models for implementing and experimenting with complex network structures.
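A minimal sketch of the goal (not JANUS itself), shown here with TensorFlow's tf.function: an imperative Python function, including data-dependent control flow, is converted into a symbolic dataflow graph and executed with graph-level optimizations.

```python
import tensorflow as tf

@tf.function  # traces the imperative function into a symbolic dataflow graph
def step(x):
    # Imperative-looking code with Python control flow, converted by AutoGraph.
    if tf.reduce_sum(x) > 0:
        return tf.nn.relu(x)
    return -x

print(step(tf.constant([-1.0, 2.0, 3.0])))
```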

Improving the Expressiveness of Deep Learning Frameworks with Recursion

no code implementations 4 Sep 2018 Eunji Jeong, Joo Seong Jeong, Soojeong Kim, Gyeong-In Yu, Byung-Gon Chun

Recursive neural networks have widely been used by researchers to handle applications with recursively or hierarchically structured data.
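A minimal sketch of a recursive neural network (not the paper's system): the model recurses over a binary tree, composing child representations bottom-up. The module, tree encoding, and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TreeRNN(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.compose = nn.Linear(2 * dim, dim)  # combines two child embeddings

    def forward(self, tree):
        # A tree is either a leaf tensor or a (left, right) tuple of subtrees.
        if isinstance(tree, torch.Tensor):
            return tree
        left, right = tree
        children = torch.cat([self.forward(left), self.forward(right)], dim=-1)
        return torch.tanh(self.compose(children))

dim = 8
leaf = lambda: torch.randn(dim)
model = TreeRNN(dim)
root = model(((leaf(), leaf()), leaf()))  # recursion follows the data's structure
```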

Parallax: Automatic Data-Parallel Training of Deep Neural Networks

1 code implementation 8 Aug 2018 Soojeong Kim, Gyeong-In Yu, Hojin Park, Sungwoo Cho, Eunji Jeong, Hyeonmin Ha, Sanha Lee, Joo Seong Jeong, Byung-Gon Chun

The employment of high-performance servers and GPU accelerators for training deep neural network models has greatly accelerated recent advances in machine learning (ML).

Distributed, Parallel, and Cluster Computing
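A minimal single-process sketch of the data-parallel pattern Parallax automates (not its implementation): each replica computes gradients on its own shard of the batch, and the averaged gradients update shared parameters. The model, shard count, and learning rate are illustrative assumptions.

```python
import torch

w = torch.randn(10, 1, requires_grad=True)  # shared model parameters
x, y = torch.randn(64, 10), torch.randn(64, 1)

shards = zip(x.chunk(4), y.chunk(4))        # 4 replicas, 4 data shards
grads = []
for xs, ys in shards:
    loss = ((xs @ w - ys) ** 2).mean()      # per-replica forward and backward pass
    grads.append(torch.autograd.grad(loss, w)[0])

with torch.no_grad():
    w -= 0.01 * torch.stack(grads).mean(dim=0)  # all-reduce-style gradient averaging
```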
