Search Results for author: Daehwan Kim

Found 6 papers, 2 papers with code

Proxy Anchor-based Unsupervised Learning for Continuous Generalized Category Discovery

1 code implementation • ICCV 2023 • Hyungmin Kim, Sungho Suh, Daehwan Kim, Daun Jeong, Hansang Cho, Junmo Kim

Existing methods for novel category discovery are limited by their reliance on labeled datasets and prior knowledge about the number of novel categories and the proportion of novel samples in the batch.

Class Incremental Learning • Incremental Learning +1

SplitNet: Learnable Clean-Noisy Label Splitting for Learning with Noisy Labels

no code implementations • 20 Nov 2022 • Daehwan Kim, Kwangrok Ryoo, Hansang Cho, Seungryong Kim

To address this, methods have been proposed that automatically split labels into clean and noisy sets and then train a semi-supervised learner within a Learning with Noisy Labels (LNL) framework.

Learning with noisy labels
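The clean/noisy splitting the abstract describes can be illustrated with the classic small-loss heuristic, a common baseline in LNL (this is not SplitNet's learnable splitter; the function name and `clean_fraction` parameter are illustrative):

```python
import numpy as np

def small_loss_split(losses, clean_fraction=0.5):
    """Split samples into clean/noisy sets by the small-loss heuristic:
    samples whose training loss falls in the lowest `clean_fraction`
    quantile are treated as clean, the rest as noisy."""
    cutoff = np.quantile(losses, clean_fraction)
    clean_mask = losses <= cutoff
    return clean_mask, ~clean_mask
```

The clean subset is then used as labeled data and the noisy subset as unlabeled data for a semi-supervised learner.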

AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation

no code implementations • 20 Nov 2022 • Hyungmin Kim, Sungho Suh, SungHyun Baek, Daehwan Kim, Daun Jeong, Hansang Cho, Junmo Kim

Our model not only distills deterministic and progressive knowledge from the predictive probabilities of the pre-trained model and the previous epoch, but also transfers knowledge of the deterministic predictive distributions using adversarial learning.

Self-Knowledge Distillation

ConMatch: Semi-Supervised Learning with Confidence-Guided Consistency Regularization

1 code implementation • 18 Aug 2022 • Jiwon Kim, Youngjo Min, Daehwan Kim, Gyuseong Lee, Junyoung Seo, Kwangrok Ryoo, Seungryong Kim

We present a novel semi-supervised learning framework, dubbed ConMatch, that intelligently leverages consistency regularization between the model's predictions from two strongly-augmented views of an image, weighted by the confidence of the pseudo-label.

Pseudo Label
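The confidence-weighted consistency regularization described above can be sketched as follows (a minimal illustration, not ConMatch's actual loss; function and parameter names are assumptions):

```python
import numpy as np

def confidence_weighted_consistency(pred_view1, pred_view2, confidence, threshold=0.95):
    """Toy consistency loss between predictions from two strongly-augmented
    views, where each sample's term is weighted by its pseudo-label confidence
    and low-confidence samples are masked out."""
    # Hard pseudo-label from the first view
    pseudo_label = np.argmax(pred_view1, axis=1)
    # Cross-entropy of the second view's prediction against that pseudo-label
    ce = -np.log(pred_view2[np.arange(len(pseudo_label)), pseudo_label] + 1e-12)
    # Mask out low-confidence samples; weight the rest by their confidence
    mask = (confidence >= threshold).astype(float)
    return float(np.sum(mask * confidence * ce) / max(np.sum(mask), 1.0))
```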

Semi-Supervised Learning of Semantic Correspondence with Pseudo-Labels

no code implementations • CVPR 2022 • Jiwon Kim, Kwangrok Ryoo, Junyoung Seo, Gyuseong Lee, Daehwan Kim, Hansang Cho, Seungryong Kim

In this paper, we present a simple but effective solution for semantic correspondence, called SemiMatch, that trains the networks in a semi-supervised manner by supplementing a few ground-truth correspondences with a large number of confident correspondences used as pseudo-labels.

Data Augmentation • Semantic Correspondence +1

AggMatch: Aggregating Pseudo Labels for Semi-Supervised Learning

no code implementations • 25 Jan 2022 • Jiwon Kim, Kwangrok Ryoo, Gyuseong Lee, Seokju Cho, Junyoung Seo, Daehwan Kim, Hansang Cho, Seungryong Kim

In this paper, we address this limitation with a novel SSL framework for aggregating pseudo labels, called AggMatch, which refines initial pseudo labels by using different confident instances.

Pseudo Label
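The idea of refining a sample's pseudo label using other confident instances can be sketched as a similarity-weighted aggregation (an illustrative toy, not AggMatch's actual aggregation module; all names are assumptions):

```python
import numpy as np

def aggregate_pseudo_labels(probs, features, confidence, threshold=0.9, k=3):
    """Toy pseudo-label refinement: re-estimate each sample's class
    distribution as a similarity-weighted average over its k most
    similar *confident* instances."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T  # cosine similarities between all samples
    confident = confidence >= threshold
    refined = probs.copy()
    for i in range(len(probs)):
        # Candidate pool: confident samples other than i itself
        pool = np.where(confident & (np.arange(len(probs)) != i))[0]
        if len(pool) == 0:
            continue
        nbrs = pool[np.argsort(-sim[i, pool])[:k]]
        w = np.clip(sim[i, nbrs], 0, None)  # ignore negative similarities
        if w.sum() == 0:
            continue
        refined[i] = (w[:, None] * probs[nbrs]).sum(axis=0) / w.sum()
    return refined
```

Since each refined row is a convex combination of probability distributions, the output rows remain valid distributions.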
