no code implementations • 23 Feb 2022 • Jaehong Yoon, Geon Park, Wonyong Jeong, Sung Ju Hwang
We introduce a pragmatic FL scenario with bitwidth heterogeneity across the participating devices, dubbed Bitwidth Heterogeneous Federated Learning (BHFL).
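To make the setting concrete, here is a minimal, generic sketch of bitwidth heterogeneity: clients quantize their weights to device-specific bitwidths before sending them to a server, which dequantizes and naively averages them. This is only an illustration of the scenario, not the aggregation scheme proposed in the paper; all names and values below are hypothetical.

```python
# Illustrative only: clients with different hardware bitwidths send quantized
# weights; the server dequantizes and averages them (naive baseline).
import numpy as np

def quantize(weights, bits):
    """Uniform symmetric quantization of a weight vector to `bits` bits."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / levels
    q = np.round(weights / scale).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
global_weights = rng.normal(size=8).astype(np.float32)
client_bits = [4, 8, 16]  # heterogeneous bitwidths across devices

client_updates = []
for bits in client_bits:
    local = global_weights + 0.01 * rng.normal(size=8)  # stand-in for local training
    q, scale = quantize(local, bits)
    client_updates.append(dequantize(q, scale))

# Naive server-side average of the dequantized client updates.
aggregated = np.mean(client_updates, axis=0)
print(aggregated)
```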
1 code implementation • ICLR 2022 • Divyam Madaan, Jaehong Yoon, Yuanchun Li, Yunxin Liu, Sung Ju Hwang
Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously acquired knowledge.
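A minimal sketch of the CL setup, for readers unfamiliar with it: one model is trained on tasks sequentially, with no further access to earlier tasks' data, which is what makes catastrophic forgetting possible. The linear model and SGD step below are purely illustrative.

```python
# Illustrative continual learning loop: tasks arrive one at a time and the
# loss on the first task typically degrades as later tasks are learned.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(5)  # shared model parameters

def sgd_step(w, x, y, lr=0.1):
    pred = x @ w
    grad = 2 * x.T @ (pred - y) / len(y)  # squared-error gradient
    return w - lr * grad

tasks = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(3)]

for t, (x, y) in enumerate(tasks):  # tasks arrive sequentially
    for _ in range(100):
        w = sgd_step(w, x, y)
    # Earlier tasks' data is no longer available for training at this point.
    loss0 = np.mean((tasks[0][0] @ w - tasks[0][1]) ** 2)
    print(f"finished task {t}, loss on task 0 = {loss0:.3f}")
```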
no code implementations • ICLR 2022 • Jaehong Yoon, Divyam Madaan, Eunho Yang, Sung Ju Hwang
We validate the effectiveness of our coreset selection mechanism over various standard, imbalanced, and noisy datasets against strong continual learning baselines, demonstrating that it improves task adaptation and prevents catastrophic forgetting in a sample-efficient manner.
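As a rough illustration of where a coreset fits in rehearsal-based continual learning, the sketch below keeps the k samples whose features are closest to the feature mean (a simple herding-style heuristic). This is not the selection objective of the paper; it only shows the role of the selected buffer.

```python
# Illustrative herding-style coreset selection for a rehearsal buffer.
import numpy as np

def select_coreset(features, k):
    """Return indices of the k samples closest to the feature mean."""
    mean = features.mean(axis=0)
    dists = np.linalg.norm(features - mean, axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
task_features = rng.normal(size=(100, 16))   # embeddings of the current task
buffer_idx = select_coreset(task_features, k=10)
replay_buffer = task_features[buffer_idx]    # stored for rehearsal on later tasks
print(replay_buffer.shape)                   # (10, 16)
```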
no code implementations • 1 Jan 2021 • Minyoung Song, Jaehong Yoon, Eunho Yang, Sung Ju Hwang
As deep neural networks are growing in size and being increasingly deployed to more resource-limited devices, there has been a recent surge of interest in network pruning methods, which aim to remove less important weights or activations of a given network.
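For context, the sketch below is the textbook magnitude-pruning baseline, zeroing out the globally smallest weights of a layer. It is not the method studied in this paper; it only illustrates what "removing less important weights" means in practice.

```python
# Illustrative magnitude pruning: zero out the smallest-magnitude weights.
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k] if k > 0 else 0.0
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

rng = np.random.default_rng(0)
layer = rng.normal(size=(64, 64)).astype(np.float32)
pruned, mask = magnitude_prune(layer, sparsity=0.9)
print(f"kept {mask.mean():.0%} of weights")
```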
1 code implementation • ICLR 2021 • Wonyong Jeong, Jaehong Yoon, Eunho Yang, Sung Ju Hwang
Through extensive experimental validation of our method in the two different scenarios, we show that our method outperforms both local semi-supervised learning and baselines which naively combine federated learning with semi-supervised learning.
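To ground the federated semi-supervised setting, here is a generic pseudo-labeling sketch for an unlabeled client: it labels its own data with the current global model and keeps only confident predictions for local training. This is a standard baseline, not the method of the paper, and the linear classifier is hypothetical.

```python
# Illustrative pseudo-labeling on an unlabeled client in federated
# semi-supervised learning (confidence-thresholded).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
unlabeled_x = rng.normal(size=(32, 10))   # features on an unlabeled client
global_w = rng.normal(size=(10, 5))       # current global linear classifier

probs = softmax(unlabeled_x @ global_w)
conf = probs.max(axis=1)
pseudo_labels = probs.argmax(axis=1)

keep = conf > 0.8                          # confidence threshold
train_x, train_y = unlabeled_x[keep], pseudo_labels[keep]
print(f"{keep.sum()} of {len(keep)} samples pseudo-labeled for local training")
```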
1 code implementation • 6 Mar 2020 • Jaehong Yoon, Wonyong Jeong, Giwoong Lee, Eunho Yang, Sung Ju Hwang
There has been a surge of interest in continual learning and federated learning, both of which are important for deploying deep neural networks in real-world scenarios.
1 code implementation • ICLR 2020 • Jaehong Yoon, Saehoon Kim, Eunho Yang, Sung Ju Hwang
First, a continual learning model should effectively handle catastrophic forgetting and be efficient to train even with a large number of tasks.
1 code implementation • 28 May 2018 • Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, Sung Ju Hwang
With such input-independent dropout, each neuron evolves to be generic across inputs, which makes it difficult to sparsify the network without accuracy loss.
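The sketch below contrasts an input-independent dropout mask with an input-dependent gating mask, where each hidden unit's keep-probability is a function of the input, so different inputs can switch off different units. This is only an illustration of input-dependence, not the variational beta-Bernoulli formulation of the paper; the gating weights are hypothetical.

```python
# Illustrative contrast: one shared dropout mask vs. input-dependent gates.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))            # a batch of inputs
W_h = rng.normal(size=(8, 16))         # hidden layer weights
W_g = rng.normal(size=(8, 16))         # gating weights (hypothetical)

hidden = np.maximum(x @ W_h, 0.0)      # ReLU hidden activations

# Input-independent dropout: one Bernoulli mask shared across all inputs.
shared_mask = rng.random(16) < 0.5
out_independent = hidden * shared_mask

# Input-dependent mask: keep-probabilities depend on each input.
keep_prob = sigmoid(x @ W_g)           # shape (4, 16), one gate per unit per input
out_dependent = hidden * (rng.random(keep_prob.shape) < keep_prob)
print(out_independent.shape, out_dependent.shape)
```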
3 code implementations • ICLR 2018 • Jaehong Yoon, Eunho Yang, Jeongtae Lee, Sung Ju Hwang
We propose a novel deep network architecture for lifelong learning, which we refer to as the Dynamically Expandable Network (DEN), that can dynamically decide its network capacity as it trains on a sequence of tasks, learning a compact, overlapping knowledge-sharing structure among tasks.
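A highly simplified sketch of the capacity-expansion idea: if a new task cannot be fit well with the existing hidden units, a few freshly initialized units are appended to the layer. DEN's actual algorithm (selective retraining, sparsity-regularized expansion, and unit splitting) is considerably more involved; the threshold and layer sizes below are hypothetical.

```python
# Illustrative per-task capacity expansion: grow a layer when the new task's
# loss stays above a threshold after fine-tuning.
import numpy as np

rng = np.random.default_rng(0)

def expand_layer(W, b, n_new):
    """Append n_new hidden units (columns of W, entries of b) to a layer."""
    W_new = rng.normal(scale=0.1, size=(W.shape[0], n_new))
    b_new = np.zeros(n_new)
    return np.concatenate([W, W_new], axis=1), np.concatenate([b, b_new])

W = rng.normal(scale=0.1, size=(8, 32))   # 8 inputs -> 32 hidden units
b = np.zeros(32)

task_loss_after_finetuning = 0.9          # stand-in for the new task's loss
loss_threshold = 0.5
if task_loss_after_finetuning > loss_threshold:
    W, b = expand_layer(W, b, n_new=8)    # grow capacity for the new task
print(W.shape)                            # (8, 40)
```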
1 code implementation • ICML 2017 • Jaehong Yoon, Sung Ju Hwang
The number of parameters in a deep neural network is usually very large, which helps with its learning capacity but also hinders its scalability and practicality due to memory/time inefficiency and overfitting.