1 code implementation • ICCV 2023 • Hyundong Jin, Gyeong-hyeon Kim, Chanho Ahn, Eunwoo Kim
The base network accumulates knowledge of sequential tasks, while the sparsity-inducing hypernetwork generates parameters at each time step to evolve the old knowledge.
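A minimal sketch of the general idea, assuming a PyTorch-style setup: a hypernetwork maps a learned task embedding to the weights of a target layer, and an L1 penalty on the generated weights induces sparsity. All class and parameter names here are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SparseHyperLinear(nn.Module):
    # Maps a learned task embedding to the weights of one target layer.
    def __init__(self, num_tasks, embed_dim, in_features, out_features):
        super().__init__()
        self.task_embed = nn.Embedding(num_tasks, embed_dim)
        self.generator = nn.Linear(embed_dim, out_features * in_features)
        self.shape = (out_features, in_features)

    def forward(self, task_id):
        weight = self.generator(self.task_embed(task_id))
        return weight.view(self.shape)

    def sparsity_penalty(self, task_id):
        # L1 regularization on the generated weights induces sparsity,
        # so each task touches only a small part of the parameter space.
        return self(task_id).abs().mean()

# Generate weights for task 0 and apply them to a batch.
hyper = SparseHyperLinear(num_tasks=5, embed_dim=16, in_features=64, out_features=10)
x = torch.randn(8, 64)
task = torch.tensor(0)
logits = x @ hyper(task).t()
loss = logits.square().mean() + 0.01 * hyper.sparsity_penalty(task)
loss.backward()
```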
1 code implementation • Conference 2022 • Hyundong Jin, Eunwoo Kim
In this work, we propose a novel approach that distinguishes helpful from harmful information of old tasks via model search, so that the current task is learned effectively (see the sketch below).
Ranked #1 on Continual Learning on Split MNIST (5 tasks)
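As a rough illustration of searching over old-task knowledge (a brute-force stand-in, not the paper's actual search procedure), one can score every subset of frozen old-task feature extractors by the loss of a small linear probe on the current task: subsets that lower the loss are helpful, those that raise it are harmful. All function and variable names below are hypothetical.

```python
import itertools
import torch
import torch.nn as nn

def probe_loss(feats, y, steps=50, lr=0.1):
    # Fit a small linear probe on frozen features and report its final loss.
    probe = nn.Linear(feats.shape[1], int(y.max()) + 1)
    opt = torch.optim.SGD(probe.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(probe(feats), y)
        loss.backward()
        opt.step()
    return loss.item()

def search_helpful_modules(old_modules, x, y):
    # Brute-force model search over subsets of frozen old-task extractors:
    # helpful subsets lower the current-task probe loss, harmful ones raise it.
    best = (float("inf"), ())
    with torch.no_grad():
        outputs = [m(x) for m in old_modules]
    for r in range(1, len(old_modules) + 1):
        for idx in itertools.combinations(range(len(old_modules)), r):
            feats = torch.cat([outputs[i] for i in idx], dim=1)
            best = min(best, (probe_loss(feats, y), idx))
    return best[1]

mods = [nn.Sequential(nn.Linear(32, 8), nn.ReLU()) for _ in range(3)]
x, y = torch.randn(64, 32), torch.randint(0, 4, (64,))
print(search_helpful_modules(mods, x, y))
```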
no code implementations • ICCV 2019 • Chanho Ahn, Eunwoo Kim, Songhwai Oh
To this end, we propose an efficient approach that exploits a compact yet accurate model within a backbone architecture for each instance of every task.
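A hedged sketch of instance-wise model selection, assuming a gating-style mechanism (the paper's selection scheme may differ): a lightweight selector emits per-instance binary gates over backbone blocks, trained with a straight-through estimator, so each input effectively runs its own compact sub-model.

```python
import torch
import torch.nn as nn

class GatedBackbone(nn.Module):
    # A selector decides, per input instance, which blocks to execute.
    def __init__(self, dim, num_blocks):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_blocks)
        )
        self.selector = nn.Linear(dim, num_blocks)

    def forward(self, x):
        probs = torch.sigmoid(self.selector(x))
        hard = (probs > 0.5).float()
        # Straight-through estimator: forward uses the hard 0/1 gates,
        # backward flows gradients through the soft probabilities.
        gates = hard + probs - probs.detach()
        for i, block in enumerate(self.blocks):
            g = gates[:, i : i + 1]
            x = g * block(x) + (1.0 - g) * x  # skip unselected blocks
        return x

model = GatedBackbone(dim=64, num_blocks=4)
out = model(torch.randn(8, 64))  # each row may use a different subset of blocks
```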
no code implementations • CVPR 2019 • Eunwoo Kim, Chanho Ahn, Philip H. S. Torr, Songhwai Oh
To this end, we propose a novel network architecture producing multiple networks of different configurations, termed deep virtual networks (DVNs), for different tasks.
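One way to picture multiple networks of different configurations sharing a single model (an illustrative assumption, not the paper's exact construction of DVNs) is width slicing: each virtual network reads a different-sized slice of one shared weight tensor, so all configurations share parameters by design.

```python
import torch
import torch.nn as nn

class VirtualLinear(nn.Module):
    # One shared weight tensor hosts several "virtual" layers; virtual
    # network v exposes only the first widths[v] output units.
    def __init__(self, in_features, widths=(16, 32, 64)):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max(widths), in_features) * 0.01)
        self.widths = widths

    def forward(self, x, v):
        w = self.weight[: self.widths[v]]  # a slice, not a copy: parameters are shared
        return x @ w.t()

layer = VirtualLinear(in_features=128)
small = layer(torch.randn(4, 128), v=0)  # 16-dim features, cheapest virtual network
large = layer(torch.randn(4, 128), v=2)  # 64-dim features, full network
```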
no code implementations • CVPR 2018 • Eunwoo Kim, Chanho Ahn, Songhwai Oh
A nested sparse network consists of multiple levels of networks, each associated with a different sparsity ratio; higher-level networks share parameters with lower-level networks to enable stable nested learning.
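A minimal sketch of nested parameter sharing, assuming magnitude-based masking (the paper's masking criterion may differ): every level reuses the same weight tensor, and each level keeps only its largest-magnitude fraction of weights, so sparser levels are strict subsets of denser ones.

```python
import torch
import torch.nn as nn

class NestedSparseLinear(nn.Module):
    # All levels reuse one weight tensor; level l keeps only the
    # largest-magnitude fraction sparsity[l] of weights, so sparser
    # (lower) levels are strict subsets of denser (higher) ones.
    def __init__(self, in_features, out_features, sparsity=(0.25, 0.5, 1.0)):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.sparsity = sparsity

    def forward(self, x, level):
        k = max(1, int(self.sparsity[level] * self.weight.numel()))
        thresh = self.weight.abs().flatten().topk(k).values[-1]
        mask = (self.weight.abs() >= thresh).float()
        return x @ (self.weight * mask).t()

layer = NestedSparseLinear(64, 10)
y_sparse = layer(torch.randn(8, 64), level=0)  # 25% of weights active
y_dense = layer(torch.randn(8, 64), level=2)   # all weights active
```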
no code implementations • CVPR 2015 • Eunwoo Kim, Minsik Lee, Songhwai Oh
The proposed method is applied to a number of low-rank matrix approximation problems, demonstrating its efficiency in the presence of heavy corruptions and its effectiveness and robustness compared to existing methods.
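For intuition only (a generic robust-PCA-style scheme, not the paper's algorithm): alternating a rank-constrained SVD with soft-thresholding of the residual lets sparse heavy corruptions be absorbed into a separate term instead of distorting the low-rank estimate.

```python
import numpy as np

def robust_lowrank(M, rank, lam=0.1, iters=50):
    # Alternate a rank-constrained SVD on M - S with soft-thresholding of
    # the residual, so heavy sparse corruptions collect in S rather than
    # corrupting the low-rank estimate L.
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        residual = M - L
        S = np.sign(residual) * np.maximum(np.abs(residual) - lam, 0.0)
    return L, S

# Corrupt a rank-2 matrix with a few large outliers and recover it.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
M[rng.integers(0, 50, 30), rng.integers(0, 40, 30)] += 10.0
L, S = robust_lowrank(M, rank=2, lam=0.5)
```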