no code implementations • 21 Mar 2025 • Jungkyoo Shin, Bumsoo Kim, Eunwoo Kim
In this paper, we propose a novel class anchor alignment approach that leverages class probability distributions for multi-modal representation learning.
no code implementations • 10 Mar 2025 • Jiho Lee, Hayun Lee, Jonghyeon Kim, Kyungjae Lee, Eunwoo Kim
In robot task planning, large language models (LLMs) have shown significant promise in generating complex and long-horizon action sequences.
no code implementations • 12 Jul 2024 • Eunwoo Kim, Un Yang, Cheol Lae Roh, Stefano Ermon
Conventional anomaly detection techniques based on reconstruction via denoising diffusion models are widely used for their ability to identify anomaly locations and shapes with high accuracy.
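The core idea behind reconstruction-based anomaly detection can be sketched in a few lines: an input is compared against a reconstruction from a model trained only on normal data, and large per-pixel residuals localize the anomaly. The sketch below is a toy stand-in (the zero "reconstruction" replaces a trained diffusion model; the threshold is arbitrary), not the paper's method:

```python
import numpy as np

# Toy "image" with a planted anomalous patch.
image = np.zeros((8, 8))
image[2:4, 5:7] = 5.0

# Stand-in for a diffusion model's reconstruction: a model trained only on
# normal (all-zero) images would reconstruct something close to zeros.
reconstruction = np.zeros_like(image)

anomaly_map = np.abs(image - reconstruction)  # per-pixel anomaly score
detected = anomaly_map > 1.0                  # threshold -> anomaly mask
print(np.argwhere(detected))                  # locations of the planted patch
```

The residual map gives both the location and the shape of the anomaly, which is what makes reconstruction-based approaches attractive over image-level scores.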
1 code implementation • ICCV 2023 • Hyundong Jin, Gyeong-hyeon Kim, Chanho Ahn, Eunwoo Kim
The base network learns knowledge of sequential tasks, and the sparsity-inducing hypernetwork generates parameters at each time step to evolve old knowledge.
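The hypernetwork idea — a small network that emits the parameters of a target layer from a task (or time-step) embedding, with sparsity induced on the output — can be sketched as follows. This is a minimal illustration with made-up shapes and a soft-threshold as the sparsity mechanism, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear hypernetwork: maps an 8-d task embedding to a 4x3 target weight matrix.
H = rng.standard_normal((8, 4 * 3))

def hyper_weights(task_emb, threshold=0.5):
    """Generate target-layer weights from a task embedding,
    soft-thresholded so some entries become exactly zero (sparsity)."""
    raw = task_emb @ H
    w = np.sign(raw) * np.maximum(np.abs(raw) - threshold, 0.0)
    return w.reshape(4, 3)

task_emb = rng.standard_normal(8)   # one embedding per task / time step
W = hyper_weights(task_emb)         # generated, sparse layer weights
x = rng.standard_normal(3)
y = W @ x                           # forward pass through the generated layer
```

A new embedding per time step yields a new sparse parameter set, so old knowledge can be regenerated rather than stored in full.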
1 code implementation • Conference 2022 • Hyundong Jin, Eunwoo Kim
In this work, we propose a novel approach that differentiates helpful and harmful information for old tasks via model search, so that the current task is learned effectively.
Ranked #1 on Continual Learning on Split MNIST (5 tasks)
no code implementations • ICCV 2019 • Chanho Ahn, Eunwoo Kim, Songhwai Oh
To this end, we propose an efficient approach that exploits a compact but accurate model within a backbone architecture for each instance across all tasks.
no code implementations • CVPR 2019 • Eunwoo Kim, Chanho Ahn, Philip H. S. Torr, Songhwai Oh
To this end, we propose a novel network architecture producing multiple networks of different configurations, termed deep virtual networks (DVNs), for different tasks.
no code implementations • CVPR 2018 • Eunwoo Kim, Chanho Ahn, Songhwai Oh
A nested sparse network consists of multiple levels of networks, each with a different sparsity ratio, where higher-level networks share parameters with lower-level networks to enable stable nested learning.
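The nesting property — every parameter used at a sparser level is also used at denser levels — can be illustrated with nested binary masks over one shared weight matrix. This is a toy sketch (magnitude-based top-k masks are an assumption for illustration), not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 6))          # one shared parameter tensor

# Rank entries by magnitude once; each level keeps the top-k entries,
# so a sparser level's support is automatically a subset of a denser one's.
order = np.argsort(-np.abs(W), axis=None)

def level_mask(keep_ratio):
    k = int(keep_ratio * W.size)
    mask = np.zeros(W.size, dtype=bool)
    mask[order[:k]] = True
    return mask.reshape(W.shape)

masks = [level_mask(r) for r in (0.25, 0.5, 1.0)]  # sparse -> dense levels

# Nesting check: lower-level support is contained in higher-level support.
for lo, hi in zip(masks, masks[1:]):
    assert np.all(hi[lo])

x = rng.standard_normal(6)
outputs = [(W * m) @ x for m in masks]   # one sub-network per sparsity level
```

Because all levels index into the same tensor `W`, a single set of parameters serves every sparsity level, which is what makes the nested learning stable and memory-efficient.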
no code implementations • CVPR 2015 • Eunwoo Kim, Minsik Lee, Songhwai Oh
The proposed method is applied to a number of low-rank matrix approximation problems to demonstrate its efficiency in the presence of heavy corruptions, and to show its effectiveness and robustness compared to existing methods.
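For context, the non-robust baseline that such methods improve on is the truncated-SVD low-rank approximation, which is exact for clean low-rank data but degrades under heavy corruptions (a squared loss lets outliers dominate). A minimal baseline sketch, not the paper's robust formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth low-rank matrix (rank 2).
A = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))

# Plain rank-2 approximation via truncated SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A2 = (U[:, :2] * s[:2]) @ Vt[:2]

err = np.linalg.norm(A - A2)   # near zero: A is exactly rank 2
```

With sparse, heavy corruptions added to `A`, this squared-error fit absorbs the outliers into the factors; robust formulations replace or regularize the loss to avoid exactly that failure mode.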