no code implementations • CVPR 2023 • Jongin Lim, Youngdong Kim, Byungjai Kim, Chanho Ahn, Jinwoo Shin, Eunho Yang, Seungju Han
Our key idea is that an adversarial attack on a biased model, i.e., a model that makes decisions based on spurious correlations, may generate synthetic bias-conflicting samples, which can then be used as augmented training data for learning a debiased model.
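A minimal PyTorch sketch of this idea, assuming an FGSM-style attack on a given biased classifier (the attack form, step size eps, and pixel clamping range are illustrative assumptions, not the paper's exact procedure):

```python
import torch
import torch.nn.functional as F

def generate_bias_conflicting(model, x, y, eps=0.03):
    """Perturb inputs along the gradient that increases the biased
    model's loss, yielding samples that contradict its spurious cues."""
    model.zero_grad()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # FGSM step: move against the biased model's decision rule.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Usage: augment the training set with the attacked samples,
# keeping the original ground-truth labels y:
# x_aug = torch.cat([x, generate_bias_conflicting(biased_model, x, y)])
```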
no code implementations • ICCV 2023 • Chanho Ahn, Kikyung Kim, Ji-won Baek, Jongin Lim, Seungju Han
Although recent studies on designing objective functions robust to label noise, known as robust loss methods, have shown promising results for learning with noisy labels, these methods tend to underfit not only noisy samples but also clean ones, leading to suboptimal model performance.
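For context, a common member of the robust loss family is the generalized cross entropy (GCE) loss, sketched below in PyTorch; it makes the trade-off concrete, since a larger q increases noise robustness at the cost of fitting clean samples more slowly. This is an illustrative example of the family, not necessarily the loss studied in this paper:

```python
import torch

def gce_loss(logits, targets, q=0.7):
    """Generalized cross entropy: interpolates between standard
    cross entropy (q -> 0) and MAE (q = 1); larger q is more robust
    to label noise but can underfit, the issue noted above."""
    probs = torch.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_y.pow(q)) / q).mean()
```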
1 code implementation • CVPR 2022 • Jongin Lim, Sangdoo Yun, Seulki Park, Jin Young Choi
In this paper, we propose the Hypergraph-Induced Semantic Tuplet (HIST) loss for deep metric learning, which leverages the multilateral semantic relations between multiple samples and multiple classes via hypergraph modeling.
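As a rough illustration of modeling sample-to-class relations, the toy PyTorch sketch below treats softmax-normalized sample-to-proxy similarities as a soft hypergraph incidence matrix; the actual HIST loss also propagates semantics over the hypergraph, which this simplification omits (class_proxies, temp, and the loss form are assumptions):

```python
import torch
import torch.nn.functional as F

def hist_style_loss(embeddings, class_proxies, labels, temp=0.1):
    """Toy sketch: each column of the incidence matrix is a class-level
    hyperedge; the loss pulls every sample toward its own class
    hyperedge relative to all others."""
    z = F.normalize(embeddings, dim=1)
    p = F.normalize(class_proxies, dim=1)
    # (batch, classes): soft sample-to-class incidence scores.
    incidence = torch.softmax(z @ p.T / temp, dim=1)
    return F.nll_loss(torch.log(incidence + 1e-12), labels)
```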
1 code implementation • ICCV 2021 • Seulki Park, Jongin Lim, Younghan Jeon, Jin Young Choi
In this paper, we propose a balancing training method to address the problems that arise in learning from imbalanced data.
Ranked #46 on Long-tail Learning on CIFAR-10-LT (ρ=10)
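A standard baseline in this setting is class-balanced re-weighting by the effective number of samples; the PyTorch sketch below illustrates that baseline only, not the balancing method proposed in the paper (beta and the weight normalization are conventional choices):

```python
import torch
import torch.nn.functional as F

def class_balanced_loss(logits, targets, counts, beta=0.999):
    """Weight each class by the inverse 'effective number of samples'
    (1 - beta^n_c) / (1 - beta), so rare classes count more."""
    counts = torch.as_tensor(counts, dtype=torch.float, device=logits.device)
    eff_num = (1.0 - torch.pow(beta, counts)) / (1.0 - beta)
    weights = 1.0 / eff_num
    weights = weights / weights.sum() * len(counts)  # mean weight = 1
    return F.cross_entropy(logits, targets, weight=weights)
```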
1 code implementation • 18 Jun 2020 • Jongin Lim, Daeho Um, Hyung Jin Chang, Dae Ung Jo, Jin Young Choi
In contrast to existing diffusion methods, whose transition matrix is determined solely by the graph structure, CAD accounts for both the node features and the graph structure through a class-attentive transition matrix constructed with the aid of a classifier.
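A simplified dense PyTorch sketch of such a matrix, where each edge is re-weighted by the agreement of the classifier's predictions at its endpoints before row normalization (the exact attention form used by CAD differs; this is an assumed toy construction):

```python
import torch

def class_attentive_transition(adj, class_probs):
    """adj: (n, n) dense adjacency; class_probs: (n, C) classifier
    outputs. Returns a row-stochastic transition matrix that favors
    edges between nodes predicted to share a class."""
    attn = class_probs @ class_probs.T   # (n, n) class agreement
    weighted = adj * attn                # keep only existing edges
    weighted = weighted + 1e-12          # avoid zero rows
    return weighted / weighted.sum(dim=1, keepdim=True)

# Diffusion step: features <- T @ features, iterated K times.
```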
1 code implementation • 18 Jan 2019 • Youngmin Ro, Jongwon Choi, Dae Ung Jo, Byeongho Heo, Jongin Lim, Jin Young Choi
Our strategy alleviates the vanishing-gradient problem in low-level layers and robustly trains those layers to fit the ReID dataset, thereby improving performance on ReID tasks.
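One simple way to bias training toward low-level layers is layer-wise learning rates, sketched below in PyTorch; the layer names and boost factor are hypothetical, and this is not necessarily the strategy the paper proposes:

```python
import torch

def layerwise_params(model, base_lr=0.01, low_level_boost=10.0):
    """Assign a larger learning rate to early (low-level) layers so
    their updates are not dwarfed by attenuated gradients. The layer
    name prefixes assume a ResNet-style backbone and are illustrative."""
    low, high = [], []
    for name, p in model.named_parameters():
        (low if name.startswith(("conv1", "layer1")) else high).append(p)
    return [
        {"params": low, "lr": base_lr * low_level_boost},
        {"params": high, "lr": base_lr},
    ]

# optimizer = torch.optim.SGD(layerwise_params(backbone), momentum=0.9)
```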