Search Results for author: Chanho Ahn

Found 6 papers, 1 paper with code

BiasAdv: Bias-Adversarial Augmentation for Model Debiasing

no code implementations • CVPR 2023 • Jongin Lim, Youngdong Kim, Byungjai Kim, Chanho Ahn, Jinwoo Shin, Eunho Yang, Seungju Han

Our key idea is that an adversarial attack on a biased model that makes decisions based on spurious correlations may generate synthetic bias-conflicting samples, which can then be used as augmented training data for learning a debiased model.

Adversarial Attack • Data Augmentation
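
The abstract suggests a simple recipe: attack a biased model so its spurious cue stops supporting the label, then reuse the perturbed copies as training data. Below is a minimal sketch of that idea in PyTorch; the function names, the `eps` budget, and the choice of a plain FGSM attack are illustrative assumptions, not the paper's method.

```python
# Sketch of the BiasAdv idea (not the authors' code): perturb inputs with
# FGSM against a *biased* model so the spurious cue no longer supports the
# label, then use the perturbed images as augmented training data.
import torch
import torch.nn.functional as F

def fgsm_bias_conflicting(biased_model, x, y, eps=0.03):
    """Generate hypothetical bias-conflicting samples via an FGSM attack."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(biased_model(x_adv), y)
    loss.backward()
    # Move each input in the direction that *increases* the biased model's
    # loss, breaking the spurious correlation it relies on.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def debias_step(debiased_model, biased_model, optimizer, x, y):
    x_aug = fgsm_bias_conflicting(biased_model, x, y)
    # Train the debiased model on original plus augmented samples, keeping
    # the original (clean) labels for the adversarial copies.
    loss = F.cross_entropy(debiased_model(torch.cat([x, x_aug])),
                           torch.cat([y, y]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```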

Sample-wise Label Confidence Incorporation for Learning with Noisy Labels

no code implementations • ICCV 2023 • Chanho Ahn, Kikyung Kim, Ji-won Baek, Jongin Lim, Seungju Han

Although recent studies on designing objective functions robust to label noise, known as robust loss methods, have shown promising results for learning with noisy labels, they suffer from underfitting not only noisy samples but also clean ones, leading to suboptimal model performance.

Learning with noisy labels
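
Going by the abstract alone, one way to incorporate sample-wise label confidence is to down-weight each sample's loss by the model's own probability for its given (possibly noisy) label, so the objective stops underfitting clean samples uniformly. The sketch below is a hedged reading of that idea, not the paper's actual objective.

```python
# Assumed sample-wise label-confidence weighting: likely-noisy samples get
# small weights, clean samples keep (near-)full gradient signal.
import torch
import torch.nn.functional as F

def confidence_weighted_loss(logits, targets):
    probs = F.softmax(logits, dim=1)
    # Confidence = predicted probability of the provided (possibly noisy)
    # label; detached so the weight itself is not optimized.
    conf = probs.gather(1, targets.unsqueeze(1)).squeeze(1).detach()
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (conf * per_sample).mean()
```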

Growing a Brain with Sparsity-Inducing Generation for Continual Learning

1 code implementation • ICCV 2023 • Hyundong Jin, Gyeong-hyeon Kim, Chanho Ahn, Eunwoo Kim

The base network learns knowledge of sequential tasks, and the sparsity-inducing hypernetwork generates parameters at each time step to evolve the old knowledge.

Action Recognition • Continual Learning +3
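
As a rough illustration of the sparsity-inducing hypernetwork described above (the interface, layer shapes, and L1 penalty are my assumptions, not the released code), a small generator can map a learned task embedding to the weights of a target layer and expose a sparsity penalty for the training loss:

```python
# Assumed sketch: a hypernetwork generates per-task layer weights; an L1
# term encourages the generated parameters to be sparse, so each new task
# perturbs little of the previously learned solution.
import torch
import torch.nn as nn

class SparseHyperNet(nn.Module):
    def __init__(self, num_tasks, emb_dim=32, out_features=10, in_features=64):
        super().__init__()
        self.task_emb = nn.Embedding(num_tasks, emb_dim)
        self.generator = nn.Linear(emb_dim, out_features * in_features)
        self.shape = (out_features, in_features)

    def forward(self, x, task_id):
        w = self.generator(self.task_emb(task_id)).view(self.shape)
        # Return the L1 norm of the generated weights as a sparsity penalty.
        return x @ w.t(), w.abs().sum()
```

A training loop would then add the returned penalty, e.g. `loss = task_loss + lam * l1`, so that each time step generates mostly-zero changes to old knowledge.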

Deep Elastic Networks with Model Selection for Multi-Task Learning

no code implementations • ICCV 2019 • Chanho Ahn, Eunwoo Kim, Songhwai Oh

To this end, we propose an efficient approach that exploits a compact but accurate model within a backbone architecture for each instance of every task.

Image Classification • Model Selection +1
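
The instance-wise model selection described above might look like the following sketch: a cheap gate inspects each input and decides which backbone blocks to run, so every instance gets its own compact sub-model. The architecture (residual MLP blocks, a sigmoid gate, hard thresholding at 0.5) is an assumption for illustration, not the paper's design.

```python
# Assumed sketch of per-instance model selection inside a shared backbone.
import torch
import torch.nn as nn

class ElasticBackbone(nn.Module):
    def __init__(self, dim=64, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
            for _ in range(num_blocks))
        self.gate = nn.Linear(dim, num_blocks)  # selector: one logit per block

    def forward(self, x):
        # Hard 0/1 gates at inference; a relaxation (e.g. Gumbel-softmax)
        # would be needed to train the gates end to end.
        g = (torch.sigmoid(self.gate(x)) > 0.5).float()
        for i, block in enumerate(self.blocks):
            x = x + g[:, i:i + 1] * block(x)  # skip the block when gated off
        return x
```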

Deep Virtual Networks for Memory Efficient Inference of Multiple Tasks

no code implementations • CVPR 2019 • Eunwoo Kim, Chanho Ahn, Philip H. S. Torr, Songhwai Oh

To this end, we propose a novel network architecture producing multiple networks of different configurations, termed deep virtual networks (DVNs), for different tasks.
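
One way to picture "multiple networks of different configurations" sharing a single memory budget is one weight tensor with a fixed per-task mask, so each task sees its own virtual network. The random-mask construction below is an assumed stand-in for the paper's hierarchical configurations, not its actual scheme.

```python
# Assumed sketch: per-task binary masks carve task-specific "virtual"
# networks out of one shared parameter tensor, so many task models fit in
# the memory footprint of a single backbone.
import torch
import torch.nn as nn

class VirtualLinear(nn.Module):
    def __init__(self, in_f, out_f, num_tasks, keep=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f) * 0.02)
        # One fixed random binary mask per task (buffers, not trained).
        self.register_buffer(
            "masks", (torch.rand(num_tasks, out_f, in_f) < keep).float())

    def forward(self, x, task_id):
        return x @ (self.weight * self.masks[task_id]).t()
```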

NestedNet: Learning Nested Sparse Structures in Deep Neural Networks

no code implementations • CVPR 2018 • Eunwoo Kim, Chanho Ahn, Songhwai Oh

A nested sparse network consists of multiple levels of networks with a different sparsity ratio associated with each level, and higher level networks share parameters with lower level networks to enable stable nested learning.

Knowledge Distillation • Scheduling
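
A minimal sketch of the nested-sparsity construction follows; the random score tensor and `torch.quantile` thresholding are my assumptions, not the paper's pruning criterion. Building all level masks from one score tensor with nested thresholds guarantees that each higher level's parameter set contains the lower ones, which is the parameter sharing the abstract describes.

```python
# Assumed sketch: level k keeps the top ratios[k] fraction of weights, and
# because the thresholds are nested, every weight used at level k is also
# used at level k+1, enabling stable joint training of all levels.
import torch
import torch.nn as nn

class NestedLinear(nn.Module):
    def __init__(self, in_f, out_f, ratios=(0.25, 0.5, 1.0)):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f) * 0.02)
        scores = torch.rand(out_f, in_f)
        thresholds = [torch.quantile(scores, 1 - r) for r in ratios]
        # mask[k] is a subset of mask[k+1] by construction.
        self.register_buffer(
            "masks", torch.stack([(scores >= t).float() for t in thresholds]))

    def forward(self, x, level):
        return x @ (self.weight * self.masks[level]).t()
```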
