Search Results for author: Gyuhak Kim

Found 10 papers, 6 papers with code

Parameter-Level Soft-Masking for Continual Learning

1 code implementation • 26 Jun 2023 • Tatsuya Konishi, Mori Kurokawa, Chihiro Ono, Zixuan Ke, Gyuhak Kim, Bing Liu

Although several techniques have achieved learning with no catastrophic forgetting (CF), they attain it by letting each task monopolize a sub-network within a shared network, which severely limits knowledge transfer (KT) and over-consumes the network capacity, i.e., performance deteriorates as more tasks are learned.

Continual Learning · Incremental Learning · +1
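A minimal PyTorch sketch of the parameter-level soft-masking idea described in the abstract above, assuming per-parameter importance is approximated by normalized gradient magnitude; the function names and the importance proxy are our illustration, not the paper's exact method:

```python
import torch

def accumulate_importance(model, importance):
    """After finishing a task, record how important each parameter was,
    approximated here by its normalized gradient magnitude (an assumed
    proxy; the paper derives its own importance measure)."""
    for name, param in model.named_parameters():
        if param.grad is None:
            continue
        score = param.grad.abs()
        score = score / (score.max() + 1e-12)  # normalize to [0, 1]
        importance[name] = (torch.max(importance[name], score)
                            if name in importance else score)

def soft_mask_gradients(model, importance):
    """Before each optimizer step on a new task, shrink every
    parameter's gradient in proportion to its accumulated importance,
    so parameters that matter for old tasks change little while no
    parameter is removed from training entirely."""
    for name, param in model.named_parameters():
        if param.grad is not None and name in importance:
            param.grad.mul_(1.0 - importance[name])
```

In this sketch, soft_mask_gradients would be called between loss.backward() and optimizer.step(); because no parameter is frozen outright, the whole network stays trainable, which is what leaves room for knowledge transfer.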

Open-World Continual Learning: Unifying Novelty Detection and Continual Learning

no code implementations • 20 Apr 2023 • Gyuhak Kim, Changnan Xiao, Tatsuya Konishi, Zixuan Ke, Bing Liu

The key theoretical result is that, regardless of whether within-task prediction (WP) and OOD detection (or task-id prediction, TP) are defined explicitly or implicitly by a CIL algorithm, good WP and good OOD detection are necessary and sufficient conditions for good CIL, which unifies novelty (or OOD) detection and continual learning (CIL in particular).

Class Incremental Learning · Incremental Learning · +2
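A minimal sketch of how this decomposition can be instantiated, assuming the task probability P(task | x) comes from a softmax over per-task OOD scores and class probabilities factor as P(class | x) = P(class | task, x) * P(task | x); the names cil_predict, task_heads, and ood_scores are hypothetical:

```python
import torch
import torch.nn.functional as F

def cil_predict(features, task_heads, ood_scores):
    """Compose class-incremental prediction from the two parts named in
    the result above: within-task prediction (WP) via each task's head,
    and task prediction (TP) via a softmax over per-task OOD scores.
    Heads are assumed to cover disjoint, consecutive class ranges."""
    task_probs = F.softmax(ood_scores, dim=1)                     # TP
    within = [F.softmax(h(features), dim=1) for h in task_heads]  # WP
    joint = torch.cat([p * task_probs[:, t:t + 1]
                       for t, p in enumerate(within)], dim=1)
    return joint.argmax(dim=1)  # global class index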

A Multi-Head Model for Continual Learning via Out-of-Distribution Replay

3 code implementations • 20 Aug 2022 • Gyuhak Kim, Zixuan Ke, Bing Liu

Instead of using the saved samples in memory to update the network for previous tasks/classes, as existing approaches do, MORE leverages the saved samples to build a task-specific classifier (adding a new classification head) without updating the network learned for previous tasks/classes.

Class Incremental Learning · Incremental Learning · +1
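A hedged sketch of the head-building step as we read the abstract: the shared backbone stays frozen, and the saved memory samples act as one extra out-of-distribution class when training the new head. The function and loader names are illustrative, not MORE's actual API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_task_head(backbone, feat_dim, num_classes, task_loader,
                    memory_loader, epochs=5, lr=1e-3):
    """Build a classifier head for the current task only. Saved samples
    from previous tasks are labeled as one extra out-of-distribution
    class (index num_classes), so the head learns the new classes and a
    boundary against everything outside the task. The shared backbone
    is frozen and never updated."""
    head = nn.Linear(feat_dim, num_classes + 1)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    backbone.eval()
    for _ in range(epochs):
        for (x_new, y_new), (x_mem, _) in zip(task_loader, memory_loader):
            with torch.no_grad():                 # backbone not trained
                feats = backbone(torch.cat([x_new, x_mem]))
            y_ood = torch.full((x_mem.size(0),), num_classes,
                               dtype=torch.long)  # memory -> OOD class
            loss = F.cross_entropy(head(feats), torch.cat([y_new, y_ood]))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```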

Continual Learning Based on OOD Detection and Task Masking

1 code implementation • 17 Mar 2022 • Gyuhak Kim, Sepideh Esmaeilpour, Changnan Xiao, Bing Liu

Existing continual learning techniques focus on either task incremental learning (TIL) or class incremental learning (CIL), but not both.

Class Incremental Learning · Incremental Learning · +1
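Task masking of the hard-attention (HAT) style that this line of work builds on can be sketched as a per-task learned gate over a layer's units; this simplified module is our illustration, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """A linear layer gated by a learned per-task mask over its output
    units. A large scale s pushes the sigmoid toward a near-binary
    mask, so each task effectively claims a subset of units."""
    def __init__(self, in_dim, out_dim, num_tasks, s=50.0):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.task_embed = nn.Embedding(num_tasks, out_dim)
        self.s = s

    def forward(self, x, task_id):
        # task_id: LongTensor of shape (1,); mask broadcasts over batch
        mask = torch.sigmoid(self.s * self.task_embed(task_id))
        return self.fc(x) * mask

# Hypothetical usage: layer = MaskedLinear(128, 256, num_tasks=5)
# out = layer(x, torch.tensor([task_id]))
```

After a task is trained, units whose mask saturates near 1 would have their gradients blocked when later tasks are learned, which is what prevents forgetting.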

Partially Relaxed Masks for Lightweight Knowledge Transfer without Forgetting in Continual Learning

no code implementations • 29 Sep 2021 • Tatsuya Konishi, Mori Kurokawa, Roberto Legaspi, Chihiro Ono, Zixuan Ke, Gyuhak Kim, Bing Liu

The goal of this work is to endow such systems with the additional ability to transfer knowledge among tasks when the tasks are similar and share knowledge, so as to achieve higher accuracy.

Continual Learning · Incremental Learning · +1

Continual Learning Using Pseudo-Replay via Latent Space Sampling

no code implementations • 29 Sep 2021 • Gyuhak Kim, Sepideh Esmaeilpour, Zixuan Ke, Tatsuya Konishi, Bing Liu

PLS is not only simple and efficient but also preserves data privacy, because it works in the latent feature space rather than on raw data.

Class Incremental Learning · Incremental Learning
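One way to realize pseudo-replay in the latent space is to keep only per-class feature statistics and sample from them at replay time; the Gaussian choice and the class names below are our assumption, not necessarily the paper's:

```python
import torch

class LatentReplay:
    """Store per-class feature statistics instead of raw data, then
    sample pseudo-features for replay. Only latent statistics are
    kept, so no raw inputs (and hence no private data) are saved."""
    def __init__(self):
        self.stats = {}  # class id -> (mean, std) of latent features

    def record(self, feats, label):
        # feats: (n, d) latent features of one class from a past task
        self.stats[label] = (feats.mean(dim=0), feats.std(dim=0) + 1e-6)

    def sample(self, label, n):
        # Draw n pseudo-features to mix into the current task's batch.
        mean, std = self.stats[label]
        return mean + std * torch.randn(n, mean.numel())
```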

Continual Learning via Principal Components Projection

no code implementations • 25 Sep 2019 • Gyuhak Kim, Bing Liu

The idea is that, in learning a new task, if we can ensure that gradient updates occur only in directions orthogonal to the input vectors of the previous tasks, then the weight updates for the new task will not affect the previous tasks.

Continual Learning
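A minimal sketch of this projection, assuming the previous tasks' input subspace is captured by its top-k principal components and each weight-gradient row is projected onto the orthogonal complement; the function names are hypothetical:

```python
import torch

def principal_directions(old_inputs, k):
    """Top-k principal components of the previous tasks' input vectors
    (rows of old_inputs), i.e., an orthonormal basis (columns of U)
    for the subspace the new task's updates must avoid."""
    X = old_inputs - old_inputs.mean(dim=0)
    _, _, Vt = torch.linalg.svd(X, full_matrices=False)
    return Vt[:k].T  # shape (input_dim, k)

def project_out(weight_grad, U):
    """Remove from each gradient row its component inside span(U), so
    the weight update is orthogonal to the previous tasks' inputs and
    a linear layer's outputs on those inputs stay unchanged."""
    return weight_grad - (weight_grad @ U) @ U.T
```

For a linear layer y = Wx, an update whose rows are orthogonal to every old input x leaves Wx unchanged on those inputs, which is why the projected update cannot disturb previous tasks.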
