Search Results for author: Jongin Lim

Found 6 papers, 4 papers with code

BiasAdv: Bias-Adversarial Augmentation for Model Debiasing

no code implementations CVPR 2023 Jongin Lim, Youngdong Kim, Byungjai Kim, Chanho Ahn, Jinwoo Shin, Eunho Yang, Seungju Han

Our key idea is that an adversarial attack on a biased model that makes decisions based on spurious correlations may generate synthetic bias-conflicting samples, which can then be used as augmented training data for learning a debiased model.

Adversarial Attack · Data Augmentation
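The key idea above can be sketched with a toy linear "biased" classifier and an FGSM-style attack. Everything here (the weights, the epsilon, the feature layout) is invented for illustration; the actual BiasAdv pipeline attacks a trained biased network, not a two-weight linear model:

```python
import numpy as np

# Toy biased linear classifier: it leans heavily on the spurious feature x[1].
# Hypothetical weights; x[0] stands in for the intrinsic (task-relevant) feature.
w = np.array([0.1, 2.0])

def predict(x):
    return 1 if x @ w > 0 else 0

def fgsm_attack(x, y, eps=1.5):
    # FGSM: perturb along the sign of the loss gradient w.r.t. the input.
    p = 1.0 / (1.0 + np.exp(-(x @ w)))   # sigmoid probability of class 1
    grad = (p - y) * w                   # gradient of logistic loss w.r.t. x
    return x + eps * np.sign(grad)

# A bias-aligned sample: label 1, and the spurious feature agrees (positive).
x = np.array([0.5, 1.0])
y = 1
x_adv = fgsm_attack(x, y)
# (x_adv, y) keeps the original label while the perturbation flips the spurious
# cue, i.e. a synthetic bias-conflicting sample for augmented training.
```

Because the biased model relies on the spurious cue, the attack mostly moves that cue, which is exactly what makes the resulting sample bias-conflicting.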

Sample-wise Label Confidence Incorporation for Learning with Noisy Labels

no code implementations ICCV 2023 Chanho Ahn, Kikyung Kim, Ji-won Baek, Jongin Lim, Seungju Han

Although recent studies on designing objective functions that are robust to label noise (so-called robust loss methods) have shown promising results for learning with noisy labels, they suffer from underfitting not only noisy samples but also clean ones, leading to suboptimal model performance.

Learning with noisy labels
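The general idea of incorporating a per-sample label confidence can be sketched as a confidence-weighted cross-entropy. The confidence scores and predictions below are made up, and the paper's actual confidence estimation scheme is not reproduced here:

```python
import math

def weighted_ce(probs, labels, confidence):
    # Per-sample cross-entropy scaled by an estimated label confidence in
    # [0, 1]: suspected-noisy labels contribute less, while clean samples
    # keep their full loss and are not underfit.
    total = 0.0
    for p, y, c in zip(probs, labels, confidence):
        total += c * -math.log(p[y])
    return total / len(labels)

# Hypothetical predicted distributions over 2 classes for 3 samples.
probs = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]]
labels = [0, 1, 0]
conf = [1.0, 1.0, 0.2]   # third label is suspected to be noisy
loss = weighted_ce(probs, labels, conf)
```

This contrasts with a robust loss, which dampens the loss uniformly and therefore also weakens the training signal on clean samples.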

Hypergraph-Induced Semantic Tuplet Loss for Deep Metric Learning

1 code implementation CVPR 2022 Jongin Lim, Sangdoo Yun, Seulki Park, Jin Young Choi

In this paper, we propose Hypergraph-Induced Semantic Tuplet (HIST) loss for deep metric learning that leverages the multilateral semantic relations of multiple samples to multiple classes via hypergraph modeling.

Metric Learning · Node Classification
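The hypergraph ingredient can be illustrated with a minimal soft incidence matrix between samples and class hyperedges. The embeddings, prototypes, and temperature below are invented, and the full HIST loss is more involved than this; the sketch only shows the multilateral samples-to-classes relation that pairwise losses cannot express:

```python
import numpy as np

# Hypothetical embeddings (4 samples, 2-D) and class prototypes (2 classes).
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
C = np.array([[1.0, 0.0], [0.0, 1.0]])

def incidence(X, C, temp=0.1):
    # Soft incidence matrix H (samples x class hyperedges): each class
    # hyperedge connects every sample, weighted by a softmax over negative
    # squared distances to the class prototype.
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    logits = -d2 / temp
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

H = incidence(X, C)
# Each row of H sums to 1: every sample relates to all classes at once,
# rather than to one other sample at a time.
```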

Class-Attentive Diffusion Network for Semi-Supervised Classification

1 code implementation 18 Jun 2020 Jongin Lim, Daeho Um, Hyung Jin Chang, Dae Ung Jo, Jin Young Choi

In contrast to existing diffusion methods, whose transition matrix is determined solely by the graph structure, CAD considers both the node features and the graph structure through our class-attentive transition matrix, which utilizes a classifier.

Classification · General Classification
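One way to see how a transition matrix can use both structure and classifier output is the sketch below: attention between neighbors comes from the agreement of their predicted class distributions, so diffusion favors likely same-class edges. The graph, probabilities, and agreement measure are all hypothetical; CAD's actual transition matrix is defined differently in the paper:

```python
import numpy as np

# Toy graph: adjacency A over 4 nodes, and hypothetical classifier
# probabilities P (2 classes) for each node.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
P = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.2, 0.8],
              [0.1, 0.9]])

def class_attentive_transition(A, P):
    # Edge attention = dot product of the two endpoints' predicted class
    # distributions, masked by the adjacency, then row-normalized so each
    # row is a valid transition distribution.
    att = A * (P @ P.T)
    return att / att.sum(axis=1, keepdims=True)

T = class_attentive_transition(A, P)
# Node 0 (likely class 0) now diffuses more toward node 1 (also likely
# class 0) than toward node 2, even though both are neighbors.
```

A structure-only transition matrix (e.g. row-normalized A) would weight both neighbors of node 0 equally.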

Backbone Can Not be Trained at Once: Rolling Back to Pre-trained Network for Person Re-Identification

1 code implementation 18 Jan 2019 Youngmin Ro, Jongwon Choi, Dae Ung Jo, Byeongho Heo, Jongin Lim, Jin Young Choi

Our strategy alleviates the problem of gradient vanishing in low-level layers and robustly trains the low-level layers to fit the ReID dataset, thereby increasing the performance of ReID tasks.

Person Re-Identification · Pose Estimation
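The rolling-back strategy can be sketched with a toy parameter dictionary standing in for a pretrained backbone: after each fine-tuning stage, the low-level layers are reset to their pretrained weights, so they are only refined once the high-level layers have adapted. The layer names, the one-parameter "layers", and the dummy update are all invented for illustration:

```python
import copy

# Hypothetical pretrained backbone parameters (one list of weights per layer).
pretrained = {"conv1": [0.5, 0.5], "conv2": [0.3, 0.3], "fc": [0.0, 0.0]}

def train_with_rollback(params, low_level=("conv1",), stages=3):
    # Sketch of rolling back: snapshot the pretrained weights, fine-tune in
    # stages, and after every stage except the last reset the designated
    # low-level layers to the snapshot.
    params = copy.deepcopy(params)
    snapshot = copy.deepcopy(params)
    for stage in range(stages):
        for name in params:                  # dummy "gradient update"
            params[name] = [w + 0.1 for w in params[name]]
        if stage < stages - 1:               # roll back low-level layers
            for name in low_level:
                params[name] = list(snapshot[name])
    return params

out = train_with_rollback(pretrained)
# "conv1" ends one update away from its pretrained value, while "fc" has
# accumulated all three updates: low-level layers drift far less.
```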
