Search Results for author: Eunji Jeong

Found 6 papers, 2 papers with code

Terra: Imperative-Symbolic Co-Execution of Imperative Deep Learning Programs

no code implementations NeurIPS 2021 Taebum Kim, Eunji Jeong, Geon-Woo Kim, Yunmo Koo, Sehoon Kim, Gyeong-In Yu, Byung-Gon Chun

Recently, several systems have been proposed to combine the usability of imperative programming with the optimized performance of symbolic graph execution.
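For readers unfamiliar with the distinction this abstract draws, below is a minimal sketch of imperative (eager) versus symbolic graph execution, using PyTorch and torch.jit.trace purely as illustrative stand-ins; it is not Terra's co-execution mechanism.

```python
import torch

W = torch.randn(128, 256)

def model(x):
    # Imperative style: each op runs eagerly, so arbitrary Python
    # (prints, breakpoints, data-dependent logic) can run in between.
    return torch.relu(x @ W).sum()

x = torch.randn(32, 128)
eager_out = model(x)            # executed op by op

# Symbolic/graph style: capture the same computation once as a graph
# that the runtime can optimize and replay without per-op Python overhead.
graph_model = torch.jit.trace(model, x)
graph_out = graph_model(x)      # executed from the captured graph
```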

Nimble: Lightweight and Parallel GPU Task Scheduling for Deep Learning

1 code implementation NeurIPS 2020 Woosuk Kwon, Gyeong-In Yu, Eunji Jeong, Byung-Gon Chun

Ideally, DL frameworks should be able to fully utilize the computation power of GPUs such that the running time depends on the amount of computation assigned to GPUs.

Scheduling
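As background for the kind of GPU task parallelism Nimble targets, here is a minimal sketch that runs two independent operators on separate CUDA streams in PyTorch; the tensors, sizes, and manual stream handling are illustrative assumptions, and Nimble itself performs this scheduling automatically rather than by hand.

```python
import torch

# Requires a CUDA-capable GPU. Two independent operators that could,
# in principle, run concurrently on the same device.
a = torch.randn(1024, 1024, device="cuda")
b = torch.randn(1024, 1024, device="cuda")

s1 = torch.cuda.Stream()
s2 = torch.cuda.Stream()
# Make both side streams wait until the inputs created on the default
# stream are ready, then launch each op on its own stream so the GPU
# can overlap them instead of serializing every kernel.
s1.wait_stream(torch.cuda.current_stream())
s2.wait_stream(torch.cuda.current_stream())

with torch.cuda.stream(s1):
    out1 = a @ a
with torch.cuda.stream(s2):
    out2 = b @ b

torch.cuda.synchronize()  # join both streams before using the results
```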

Stage-based Hyper-parameter Optimization for Deep Learning

no code implementations 24 Nov 2019 Ahnjae Shin, Dong-Jin Shin, Sungwoo Cho, Do Yoon Kim, Eunji Jeong, Gyeong-In Yu, Byung-Gon Chun

As deep learning techniques advance faster than ever, hyper-parameter optimization has become the new major workload in deep learning clusters.
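For context on the workload this abstract refers to, below is a minimal random-search hyper-parameter loop; train_and_evaluate is a hypothetical stand-in, and this generic sketch is not the stage-based optimization the paper proposes.

```python
import random

def train_and_evaluate(lr, batch_size):
    # Placeholder for a full training run; returns a validation score.
    # (Hypothetical stand-in so the search loop is runnable.)
    return 1.0 - abs(lr - 0.01) - abs(batch_size - 64) / 1000

best = None
for _ in range(20):
    config = {"lr": 10 ** random.uniform(-4, -1),
              "batch_size": random.choice([32, 64, 128, 256])}
    score = train_and_evaluate(**config)
    if best is None or score > best[0]:
        best = (score, config)

print("best config:", best[1])
```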

JANUS: Fast and Flexible Deep Learning via Symbolic Graph Execution of Imperative Programs

no code implementations 4 Dec 2018 Eunji Jeong, Sungwoo Cho, Gyeong-In Yu, Joo Seong Jeong, Dong-Jin Shin, Byung-Gon Chun

The rapid evolution of deep neural networks demands that deep learning (DL) frameworks not only execute large computations quickly, but also support straightforward programming models for quickly implementing and experimenting with complex network structures.
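To make this imperative-versus-graph tension concrete, here is a sketch that compiles an imperative Python function with data-dependent control flow into a graph via torch.jit.script; it only illustrates the general idea and is not the JANUS system itself.

```python
import torch

@torch.jit.script
def clipped_step(x: torch.Tensor, threshold: float) -> torch.Tensor:
    # Imperative, data-dependent control flow written as ordinary Python;
    # torch.jit.script compiles it into a graph with a real conditional.
    peak = float(x.abs().max())
    if peak > threshold:
        x = x * (threshold / peak)
    return x

out = clipped_step(torch.randn(8), 1.0)
```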

Improving the Expressiveness of Deep Learning Frameworks with Recursion

no code implementations 4 Sep 2018 Eunji Jeong, Joo Seong Jeong, Soojeong Kim, Gyeong-In Yu, Byung-Gon Chun

Recursive neural networks have widely been used by researchers to handle applications with recursively or hierarchically structured data.
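As a reference for what "recursive" means in this context, here is a minimal tree-structured network that composes child representations bottom-up; the TreeNode structure, vocabulary size, and dimensions are illustrative assumptions, not the framework extension the paper describes.

```python
import torch
import torch.nn as nn

class TreeNode:
    def __init__(self, value=None, children=()):
        self.value = value              # leaf token index (None for internal nodes)
        self.children = list(children)

class RecursiveNet(nn.Module):
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.compose = nn.Linear(2 * dim, dim)

    def forward(self, node: TreeNode) -> torch.Tensor:
        if not node.children:           # leaf: look up its embedding
            return self.embed(torch.tensor(node.value))
        # internal node: recursively encode the children, then compose
        left = self.forward(node.children[0])
        right = self.forward(node.children[1])
        return torch.tanh(self.compose(torch.cat([left, right], dim=-1)))

tree = TreeNode(children=[TreeNode(value=3), TreeNode(value=7)])
vec = RecursiveNet()(tree)
```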

Parallax: Automatic Data-Parallel Training of Deep Neural Networks

1 code implementation 8 Aug 2018 Soojeong Kim, Gyeong-In Yu, Hojin Park, Sungwoo Cho, Eunji Jeong, Hyeonmin Ha, Sanha Lee, Joo Seong Jeong, Byung-Gon Chun

The employment of high-performance servers and GPU accelerators for training deep neural network models has greatly accelerated recent advances in machine learning (ML).

Distributed, Parallel, and Cluster Computing
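For context on the data-parallel training Parallax automates, below is a minimal single-process simulation of two data-parallel workers: each replica sees its own batch shard, computes local gradients, and the gradients are averaged (the all-reduce step) before a synchronized update. The model, shard sizes, and manual averaging are illustrative assumptions, not Parallax's mechanism.

```python
import copy
import torch
import torch.nn as nn

# Simulate two data-parallel workers on one machine.
model = nn.Linear(16, 1)
replicas = [copy.deepcopy(model) for _ in range(2)]

x = torch.randn(8, 16)
y = torch.randn(8, 1)
shards = [(x[:4], y[:4]), (x[4:], y[4:])]

for replica, (xb, yb) in zip(replicas, shards):
    loss = nn.functional.mse_loss(replica(xb), yb)
    loss.backward()                     # local gradients on each replica

# All-reduce: average gradients across replicas onto the master model.
for name, param in model.named_parameters():
    grads = [dict(r.named_parameters())[name].grad for r in replicas]
    param.grad = torch.stack(grads).mean(dim=0)

torch.optim.SGD(model.parameters(), lr=0.1).step()  # synchronized update
```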
