no code implementations • CVPR 2023 • Jongin Lim, Youngdong Kim, Byungjai Kim, Chanho Ahn, Jinwoo Shin, Eunho Yang, Seungju Han
Our key idea is that an adversarial attack on a biased model that makes decisions based on spurious correlations may generate synthetic bias-conflicting samples, which can then be used as augmented training data for learning a debiased model.
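As a rough illustration of this idea (not the paper's exact procedure), the sketch below uses a single-step FGSM-style attack against an auxiliary biased model to perturb the spurious cue while keeping the original labels; `biased_model`, `eps`, and the [0,1] input range are illustrative assumptions.

```python
# Minimal sketch: generate bias-conflicting samples by adversarially
# attacking a biased auxiliary model (single-step FGSM). The perturbation
# pushes inputs against the biased model's own (spurious) prediction,
# and the resulting (x_adv, y) pairs serve as augmented training data
# for the debiased model. Hyperparameters are placeholders.
import torch
import torch.nn.functional as F

def bias_adversarial_augment(biased_model, x, y, eps=0.03):
    """Return inputs perturbed against the biased model's decision."""
    x_adv = x.clone().detach().requires_grad_(True)
    logits = biased_model(x_adv)
    # Maximize the biased model's loss on its own prediction, so the
    # attack targets whatever spurious cue the model relies on.
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # assumes inputs normalized to [0, 1]
    # The semantic class is assumed unchanged; only the spurious cue is
    # attacked, so the samples act as bias-conflicting augmentations.
    return x_adv.detach(), y
```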
no code implementations • ICCV 2023 • Chanho Ahn, Kikyung Kim, Ji-won Baek, Jongin Lim, Seungju Han
Although recent studies on designing objective functions robust to label noise, known as robust loss methods, have shown promising results for learning with noisy labels, they suffer from underfitting not only noisy samples but also clean ones, leading to suboptimal model performance.
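For context, a widely used robust loss of the kind this passage refers to is Generalized Cross Entropy (Zhang & Sabuncu, 2018), sketched below. It interpolates between cross entropy (as q → 0) and MAE (as q → 1); the MAE end is noise-robust but notoriously underfits, which is the trade-off the abstract describes. This is background illustration, not the paper's proposed method.

```python
# Generalized Cross Entropy: a standard robust loss for noisy labels.
# Smaller q behaves like cross entropy (fits fast, noise-sensitive);
# q -> 1 behaves like MAE (noise-robust, but prone to underfitting).
import torch
import torch.nn.functional as F

def generalized_cross_entropy(logits, targets, q=0.7):
    probs = F.softmax(logits, dim=1)
    p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_true.pow(q)) / q).mean()
```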
1 code implementation • ICCV 2023 • Hyundong Jin, Gyeong-hyeon Kim, Chanho Ahn, Eunwoo Kim
The base network learns knowledge from sequential tasks, while the sparsity-inducing hypernetwork generates parameters at each time step to evolve old knowledge.
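A minimal sketch of this split is shown below: a shared base layer plus a hypernetwork that, given a time-step embedding, emits a sparse parameter delta for the base weights. The shapes, the embedding table size, and the L1 sparsity penalty are illustrative assumptions, not the paper's architecture.

```python
# Sketch: base network + sparsity-inducing hypernetwork. The hypernet
# maps a task/time-step embedding to a delta on the base weights, and
# an L1 penalty on the delta encourages sparse, minimal edits to old
# knowledge. All sizes here are placeholders.
import torch
import torch.nn as nn

class SparseHyperLayer(nn.Module):
    def __init__(self, in_dim, out_dim, task_dim=16, max_steps=100):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)        # shared base weights
        self.task_emb = nn.Embedding(max_steps, task_dim)
        self.hyper = nn.Linear(task_dim, out_dim * in_dim)

    def forward(self, x, task_id):
        e = self.task_emb(torch.tensor([task_id]))
        delta = self.hyper(e).view(self.base.weight.shape)
        w = self.base.weight + delta                  # evolve old knowledge
        out = nn.functional.linear(x, w, self.base.bias)
        return out, delta.abs().mean()                # L1 sparsity penalty

# Usage: out, penalty = layer(x, task_id); add the penalty to the task
# loss so the hypernetwork makes sparse edits rather than rewrites.
```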
no code implementations • ICCV 2019 • Chanho Ahn, Eunwoo Kim, Songhwai Oh
To this end, we propose an efficient approach that exploits a compact but accurate model within a backbone architecture for each instance of every task.
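The sketch below illustrates the instance-wise selection idea under simplifying assumptions: a tiny selector picks, per input, one of several sub-models of different capacities. The hard argmax stands in for a learned selection mechanism, and all names and sizes are hypothetical, not the paper's design.

```python
# Sketch: per-instance selection of a compact sub-model. A selector
# scores candidate sub-models of increasing width and routes each
# input through its chosen one. Argmax routing is a simplification.
import torch
import torch.nn as nn

class InstanceWiseSelector(nn.Module):
    def __init__(self, in_dim=32, num_classes=10, widths=(16, 64, 256)):
        super().__init__()
        self.selector = nn.Linear(in_dim, len(widths))
        self.submodels = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, w), nn.ReLU(),
                          nn.Linear(w, num_classes))
            for w in widths
        )

    def forward(self, x):
        choice = self.selector(x).argmax(dim=1)       # per-instance choice
        return torch.stack(
            [self.submodels[c.item()](xi) for c, xi in zip(choice, x)]
        )
```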
no code implementations • CVPR 2019 • Eunwoo Kim, Chanho Ahn, Philip H. S. Torr, Songhwai Oh
To this end, we propose a novel network architecture that produces multiple networks of different configurations, termed deep virtual networks (DVNs), for different tasks.
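One way to picture "virtual" networks is sketched below: several networks of different sizes are materialized from a single shared parameter tensor by slicing, so each task can run an appropriately sized model without storing separate weights. Slicing by width is an illustrative simplification of the shared-parameter idea, not the paper's construction.

```python
# Sketch: multiple virtual networks carved from one parameter tensor.
# A width fraction selects how many output units (and their weights)
# a given virtual network uses; all fractions share the same storage.
import torch
import torch.nn as nn

class VirtualLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x, width_frac=1.0):
        k = max(1, int(self.weight.shape[0] * width_frac))
        return nn.functional.linear(x, self.weight[:k], self.bias[:k])

layer = VirtualLinear(64, 128)
x = torch.randn(8, 64)
small = layer(x, width_frac=0.25)  # light virtual network for one task
full = layer(x, width_frac=1.0)    # full-capacity virtual network
```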
no code implementations • CVPR 2018 • Eunwoo Kim, Chanho Ahn, Songhwai Oh
A nested sparse network consists of multiple levels of networks, each associated with a different sparsity ratio; higher-level networks share parameters with lower-level networks to enable stable nested learning.
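The nesting property can be sketched as follows: masks at increasing densities are derived from one shared ranking of the weights, so the parameters active at a sparser level are always a subset of those active at any denser level. Magnitude-based ranking is an assumption for illustration, not necessarily the paper's procedure.

```python
# Sketch: nested binary masks over one weight tensor. Because every
# density level takes its top-k from the same ranking, sparser masks
# are strict subsets of denser ones, i.e., higher-level networks share
# all parameters of lower-level networks.
import torch
import torch.nn as nn

def nested_masks(weight, densities=(0.25, 0.5, 1.0)):
    """Return a list of nested binary masks, one per density level."""
    scores = weight.abs().flatten()
    order = scores.argsort(descending=True)   # one shared ranking
    masks = []
    for d in densities:
        k = max(1, int(d * scores.numel()))
        m = torch.zeros_like(scores)
        m[order[:k]] = 1.0                    # top-k by the same ranking
        masks.append(m.view_as(weight))       # => masks are nested
    return masks

w = nn.Parameter(torch.randn(128, 64))
level_masks = nested_masks(w)
out = nn.functional.linear(torch.randn(4, 64), w * level_masks[0])
```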