Search Results for author: Jaehong Yoon

Found 21 papers, 12 papers with code

Continual Learners are Incremental Model Generalizers

no code implementations 21 Jun 2023 Jaehong Yoon, Sung Ju Hwang, Yue Cao

We believe this paper breaks the barrier between the pre-training and fine-tuning steps and leads to a sustainable learning framework in which the continual learner incrementally improves model generalization, yielding better transfer to unseen tasks.

Continual Learning

Progressive Neural Representation for Sequential Video Compilation

1 code implementation 20 Jun 2023 Haeyong Kang, Dahyun Kim, Jaehong Yoon, Sung Ju Hwang, Chang D. Yoo

Neural Implicit Representations (NIR) have gained significant attention recently due to their ability to represent complex and high-dimensional data.

Forget-free Continual Learning with Soft-Winning SubNetworks

no code implementations 27 Mar 2023 Haeyong Kang, Jaehong Yoon, Sultan Rizky Madjid, Sung Ju Hwang, Chang D. Yoo

Inspired by the Regularized Lottery Ticket Hypothesis (RLTH), which states that competitive smooth (non-binary) subnetworks exist within a dense network in continual learning tasks, we investigate two proposed architecture-based continual learning methods that sequentially learn and select an adaptive binary subnetwork (WSN) or a non-binary soft subnetwork (SoftNet) for each task.

class-incremental learning · Few-Shot Class-Incremental Learning +1

Efficient Video Representation Learning via Motion-Aware Token Selection

1 code implementation 19 Nov 2022 Sunil Hwang, Jaehong Yoon, Youngwan Lee, Sung Ju Hwang

Recently emerged Masked Video Modeling techniques demonstrated their potential by significantly outperforming previous methods in self-supervised learning for video.

Object State Change Classification · Object State Change Classification on Ego4D +3
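Motion-aware token selection can be illustrated by scoring patch tokens with a crude motion proxy and keeping the most motion-salient ones. This is a toy sketch under the assumption that temporal frame differences approximate motion saliency; the function name and scoring are illustrative, not the paper's actual selection mechanism.

```python
import numpy as np

def motion_aware_token_select(frames, k):
    """Keep the k tokens with the largest temporal-difference magnitude (a crude motion proxy)."""
    motion = np.abs(np.diff(frames, axis=0)).mean(axis=0)  # per-token motion score
    return np.argsort(motion)[::-1][:k]

frames = np.zeros((3, 5))  # 3 frames, 5 patch tokens each
frames[1, 2] = 1.0         # token 2 changes between frames
print(motion_aware_token_select(frames, k=1))  # token 2 is selected
```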

On the Soft-Subnetwork for Few-shot Class Incremental Learning

2 code implementations 15 Sep 2022 Haeyong Kang, Jaehong Yoon, Sultan Rizky Hikmawan Madjid, Sung Ju Hwang, Chang D. Yoo

Inspired by the Regularized Lottery Ticket Hypothesis (RLTH), which hypothesizes that there exist smooth (non-binary) subnetworks within a dense network that achieve performance competitive with the dense network, we propose a few-shot class incremental learning (FSCIL) method referred to as Soft-SubNetworks (SoftNet).

class-incremental learning · Few-Shot Class-Incremental Learning +1
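A soft (non-binary) subnetwork can be sketched by relaxing a hard top-k weight selection into smooth per-weight values. The sigmoid relaxation, temperature, and function name below are illustrative assumptions, not SoftNet's actual formulation.

```python
import numpy as np

def soft_subnetwork_mask(scores, sparsity=0.5, temperature=10.0):
    """Relax a hard top-k weight selection into smooth per-weight mask values in (0, 1)."""
    k = int(round((1.0 - sparsity) * scores.size))
    thresh = np.sort(scores.ravel())[::-1][k - 1]  # k-th largest score
    return 1.0 / (1.0 + np.exp(-temperature * (scores - thresh)))

rng = np.random.default_rng(0)
mask = soft_subnetwork_mask(rng.random((4, 4)), sparsity=0.5)
print(int((mask >= 0.5).sum()))  # 8 of 16 weights dominate the soft mask
```

Unlike a binary mask, every weight keeps a small gradient path, which is what makes the subnetwork "smooth".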

BiTAT: Neural Network Binarization with Task-dependent Aggregated Transformation

no code implementations 4 Jul 2022 Geon Park, Jaehong Yoon, Haiyang Zhang, Xing Zhang, Sung Ju Hwang, Yonina C. Eldar

Neural network quantization aims to transform high-precision weights and activations of a given neural network into low-precision weights/activations for reduced memory usage and computation, while preserving the performance of the original model.

Binarization · Quantization
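The quantization step described above maps float weights onto a small uniform grid. A minimal per-tensor sketch follows; it is illustrative only and much simpler than BiTAT's task-dependent aggregated transformation.

```python
import numpy as np

def uniform_quantize(w, bits=2):
    """Quantize weights to a (2^bits)-level uniform grid, then dequantize for float ops."""
    levels = 2 ** bits - 1
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / levels
    codes = np.round((w - lo) / scale)  # integer codes in [0, levels]
    return codes * scale + lo           # dequantized low-precision weights

w = np.linspace(-1.0, 1.0, 9)
wq = uniform_quantize(w, bits=2)
print(len(set(np.round(wq, 6))))  # 4 distinct low-precision values
```

The round-trip error is bounded by half a grid step, which is the trade-off quantization makes for reduced memory and compute.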

Forget-free Continual Learning with Winning Subnetworks

1 code implementation International Conference on Machine Learning 2022 Haeyong Kang, Rusty John Lloyd Mina, Sultan Rizky Hikmawan Madjid, Jaehong Yoon, Mark Hasegawa-Johnson, Sung Ju Hwang, Chang D. Yoo

Inspired by the Lottery Ticket Hypothesis, which posits that competitive subnetworks exist within a dense network, we propose a continual learning method referred to as Winning SubNetworks (WSN), which sequentially learns and selects an optimal subnetwork for each task.

Continual Learning
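Per-task winning-subnetwork selection can be sketched as a top-k binary mask over weight importance scores. This is an illustrative reduction; WSN's learned scores and cross-task weight reuse are not shown, and the names below are assumptions.

```python
import numpy as np

def winning_subnetwork_mask(scores, sparsity=0.5):
    """Binary mask keeping the top (1 - sparsity) fraction of weights by importance score."""
    k = int(round((1.0 - sparsity) * scores.size))
    thresh = np.sort(scores.ravel())[::-1][k - 1]  # k-th largest score
    return (scores >= thresh).astype(np.float32)

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))
mask = winning_subnetwork_mask(rng.random((4, 4)), sparsity=0.75)
task_weights = weights * mask  # only the winning subnetwork is active for this task
print(int(mask.sum()))  # 4 of 16 weights are active
```

Freezing the masked-in weights after each task is what makes such methods "forget-free": later tasks cannot overwrite them.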

Personalized Subgraph Federated Learning

1 code implementation 21 Jun 2022 Jinheon Baek, Wonyong Jeong, Jiongdao Jin, Jaehong Yoon, Sung Ju Hwang

To this end, we introduce a new subgraph FL problem, personalized subgraph FL, which focuses on the joint improvement of the interrelated local GNNs rather than learning a single global model, and propose a novel framework, FEDerated Personalized sUBgraph learning (FED-PUB), to tackle it.

Federated Learning

Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization

no code implementations 23 Feb 2022 Jaehong Yoon, Geon Park, Wonyong Jeong, Sung Ju Hwang

We introduce a pragmatic FL scenario with bitwidth heterogeneity across the participating devices, dubbed Bitwidth Heterogeneous Federated Learning (BHFL).

Federated Learning

Representational Continuity for Unsupervised Continual Learning

2 code implementations ICLR 2022 Divyam Madaan, Jaehong Yoon, Yuanchun Li, Yunxin Liu, Sung Ju Hwang

Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously acquired knowledge.

Continual Learning

Online Coreset Selection for Rehearsal-based Continual Learning

no code implementations ICLR 2022 Jaehong Yoon, Divyam Madaan, Eunho Yang, Sung Ju Hwang

We validate the effectiveness of our coreset selection mechanism over various standard, imbalanced, and noisy datasets against strong continual learning baselines, demonstrating that it improves task adaptation and prevents catastrophic forgetting in a sample-efficient manner.

Continual Learning
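Coreset selection for rehearsal can be sketched by scoring candidates for representativeness, e.g. cosine similarity to the batch mean. This is a toy simplification; the actual OCS criterion also accounts for sample diversity and interference with past tasks, which are omitted here.

```python
import numpy as np

def select_coreset(features, k):
    """Keep the k samples whose feature vectors are most similar to the batch mean."""
    mean = features.mean(axis=0)
    sims = features @ mean / (np.linalg.norm(features, axis=1) * np.linalg.norm(mean) + 1e-8)
    return np.argsort(sims)[::-1][:k]

rng = np.random.default_rng(1)
feats = rng.normal(size=(10, 5))
coreset_idx = select_coreset(feats, k=3)
print(len(coreset_idx))  # 3 indices chosen for the rehearsal buffer
```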

Rapid Neural Pruning for Novel Datasets with Set-based Task-Adaptive Meta-Pruning

no code implementations 1 Jan 2021 Minyoung Song, Jaehong Yoon, Eunho Yang, Sung Ju Hwang

As deep neural networks are growing in size and being increasingly deployed to more resource-limited devices, there has been a recent surge of interest in network pruning methods, which aim to remove less important weights or activations of a given network.

Cloud Computing · Network Pruning

Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning

1 code implementation ICLR 2021 Wonyong Jeong, Jaehong Yoon, Eunho Yang, Sung Ju Hwang

Through extensive experimental validation of our method in the two different scenarios, we show that our method outperforms both local semi-supervised learning and baselines which naively combine federated learning with semi-supervised learning.

Federated Learning
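For context, federated semi-supervised methods like this build on the standard FedAvg aggregation step, sketched below. This is the generic data-size-weighted average, not FedMatch's inter-client consistency or disjoint-learning mechanism.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Standard FedAvg: data-size-weighted average of client model weights."""
    total = float(sum(client_sizes))
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

w1, w2 = np.ones((2, 2)), 3 * np.ones((2, 2))
global_w = fedavg([w1, w2], [1, 3])
print(global_w)  # each entry is (1/4)*1 + (3/4)*3 = 2.5
```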

Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive Meta-Pruning

no code implementations 22 Jun 2020 Minyoung Song, Jaehong Yoon, Eunho Yang, Sung Ju Hwang

As deep neural networks are growing in size and being increasingly deployed to more resource-limited devices, there has been a recent surge of interest in network pruning methods, which aim to remove less important weights or activations of a given network.

Cloud Computing · Network Pruning

Federated Continual Learning with Weighted Inter-client Transfer

1 code implementation 6 Mar 2020 Jaehong Yoon, Wonyong Jeong, Giwoong Lee, Eunho Yang, Sung Ju Hwang

There has been a surge of interest in continual learning and federated learning, both of which are important in deep neural networks in real-world scenarios.

Continual Learning · Federated Learning +1

Scalable and Order-robust Continual Learning with Additive Parameter Decomposition

1 code implementation ICLR 2020 Jaehong Yoon, Saehoon Kim, Eunho Yang, Sung Ju Hwang

First, a continual learning model should effectively handle catastrophic forgetting and be efficient to train even with a large number of tasks.

Continual Learning · Fairness +1

Adaptive Network Sparsification via Dependent Variational Beta-Bernoulli Dropout

no code implementations 27 Sep 2018 Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, Sung Ju Hwang

With such input-independent dropout, each neuron evolves to be generic across inputs, which makes it difficult to sparsify networks without accuracy loss.

Adaptive Network Sparsification with Dependent Variational Beta-Bernoulli Dropout

1 code implementation 28 May 2018 Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, Sung Ju Hwang

With such input-independent dropout, each neuron evolves to be generic across inputs, which makes it difficult to sparsify networks without accuracy loss.
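The beta-Bernoulli dropout idea can be sketched as sampling per-unit keep-probabilities from a Beta distribution and then sampling a Bernoulli mask from them. The input-independent form is shown; the dependent variant the paper proposes conditions the Beta parameters on the input, which is omitted here.

```python
import numpy as np

def beta_bernoulli_dropout(x, a=2.0, b=1.0, rng=None):
    """Sample per-unit keep-probabilities p ~ Beta(a, b), then a Bernoulli mask z ~ Bern(p)."""
    rng = rng or np.random.default_rng()
    p = rng.beta(a, b, size=x.shape)
    z = (rng.random(x.shape) < p).astype(x.dtype)
    return x * z

x = np.ones(1000)
out = beta_bernoulli_dropout(x, a=2.0, b=1.0, rng=np.random.default_rng(0))
print(abs(out.mean() - 2.0 / 3.0) < 0.1)  # E[p] = a/(a+b) = 2/3 of units survive on average
```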

Lifelong Learning with Dynamically Expandable Networks

3 code implementations ICLR 2018 Jaehong Yoon, Eunho Yang, Jeongtae Lee, Sung Ju Hwang

We propose a novel deep network architecture for lifelong learning, referred to as Dynamically Expandable Network (DEN), which can dynamically decide its network capacity as it trains on a sequence of tasks to learn a compact, overlapping knowledge-sharing structure among tasks.
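Dynamic capacity expansion can be sketched as appending new units to a layer when the existing capacity is insufficient for a new task. This is a toy fragment; DEN additionally performs selective retraining and unit splitting/duplication, which are not shown.

```python
import numpy as np

def expand_layer(W, num_new_units, rng):
    """Grow a layer for a new task by appending freshly initialized output units (rows)."""
    new_rows = 0.01 * rng.standard_normal((num_new_units, W.shape[1]))
    return np.vstack([W, new_rows])

W = np.zeros((4, 3))  # existing 4-unit layer (zeros stand in for trained weights)
W2 = expand_layer(W, 2, np.random.default_rng(0))
print(W2.shape)  # (6, 3): old units kept intact, 2 new units added
```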

Combined Group and Exclusive Sparsity for Deep Neural Networks

1 code implementation ICML 2017 Jaehong Yoon, Sung Ju Hwang

The number of parameters in a deep neural network is usually very large, which helps with its learning capacity but also hinders its scalability and practicality due to memory/time inefficiency and overfitting.
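The combined regularizer can be sketched as a group term (row-wise l2,1 norm, which zeroes whole rows) plus an exclusive term (half the sum of squared row-wise l1 norms, which makes weights within a row compete). The weighting coefficients below are arbitrary placeholders, not values from the paper.

```python
import numpy as np

def combined_sparsity_penalty(W, mu=0.1, gamma=0.1):
    """Group sparsity removes whole rows; exclusive sparsity promotes within-row competition."""
    group = np.sum(np.linalg.norm(W, axis=1))                 # sum of row L2 norms (l2,1)
    exclusive = 0.5 * np.sum(np.sum(np.abs(W), axis=1) ** 2)  # half sum of squared row L1 norms
    return mu * group + gamma * exclusive

W = np.array([[1.0, 0.0], [0.0, 0.0]])
print(combined_sparsity_penalty(W))  # 0.1*1.0 + 0.1*0.5, i.e. about 0.15
```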
