Search Results for author: Byeonghu Na

Found 10 papers, 8 papers with code

Unknown Domain Inconsistency Minimization for Domain Generalization

no code implementations • 12 Mar 2024 • Seungjae Shin, HeeSun Bae, Byeonghu Na, Yoon-Yeong Kim, Il-Chul Moon

In particular, by aligning the loss landscape acquired in the source domain with the loss landscapes of perturbed domains, we expect to achieve generalization grounded on flat minima for the unknown domains.

Domain Generalization
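The flat-minima mechanism referenced in this abstract is in the family of sharpness-aware minimization. As a hedged point of reference, here is a generic SAM-style update in PyTorch; it sketches flat-minima training only, not the paper's UDIM objective (which additionally perturbs the domain), and the helper name is hypothetical.

```python
import torch
import torch.nn.functional as F

def sharpness_aware_step(model, optimizer, x, y, rho=0.05):
    # First pass: loss and gradients at the current weights.
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params))
    eps = []
    with torch.no_grad():  # climb to the approximate worst case in an L2 ball
        for p in params:
            e = p.grad * (rho / (grad_norm + 1e-12))
            p.add_(e)
            eps.append(e)
    optimizer.zero_grad()
    # Second pass: the gradient of the perturbed loss drives the update,
    # steering training toward flat minima.
    F.cross_entropy(model(x), y).backward()
    with torch.no_grad():  # undo the perturbation before stepping
        for p, e in zip(params, eps):
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```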

Dirichlet-based Per-Sample Weighting by Transition Matrix for Noisy Label Learning

1 code implementation • 5 Mar 2024 • HeeSun Bae, Seungjae Shin, Byeonghu Na, Il-Chul Moon

We propose that proper utilization of the transition matrix is crucial and suggest a new utilization method based on resampling, coined RENT.

Learning with noisy labels
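A hedged sketch of transition-matrix-based resampling follows; the weighting scheme and function name are assumptions made for illustration, not taken from the RENT code.

```python
import torch

def rent_style_resample(probs_clean, noisy_labels, T):
    """Sketch: weight each sample by how consistent its observed noisy
    label is with the model's clean-label posterior pushed through the
    transition matrix T, then resample the batch with those weights.

    probs_clean : (N, K) softmax outputs over clean labels
    noisy_labels: (N,)   observed (noisy) labels, dtype long
    T           : (K, K) with T[i, j] = p(noisy = j | clean = i)
    """
    probs_noisy = probs_clean @ T                      # implied p(noisy | x)
    w = probs_noisy[torch.arange(len(noisy_labels)), noisy_labels]
    w = w / w.sum()                                    # per-sample weights
    # Resample the batch indices proportionally to the weights.
    return torch.multinomial(w, num_samples=len(w), replacement=True)
```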

Training Unbiased Diffusion Models From Biased Dataset

1 code implementation • 2 Mar 2024 • Yeongmin Kim, Byeonghu Na, Minsang Park, JoonHo Jang, Dongjun Kim, Wanmo Kang, Il-Chul Moon

While directly applying importance reweighting to score matching is intractable, we discover that using the time-dependent density ratio both for reweighting and for score correction leads to a tractable form of the objective function that regenerates the unbiased data density.
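A schematic PyTorch reading of that sentence, with a hypothetical log_ratio_net standing in for the time-dependent density-ratio estimator; this is a sketch under assumptions, not the paper's objective.

```python
import torch

def ratio_weighted_dsm_loss(score_net, log_ratio_net, x0, t, alpha_bar):
    """Sketch: log_ratio_net(x, t) estimates the time-dependent log density
    ratio; its exponential reweights the denoising score-matching loss and
    its input gradient corrects the score target. alpha_bar is a 1-D tensor
    of cumulative noise-schedule values indexed by integer timesteps t."""
    eps = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    xt = (a.sqrt() * x0 + (1 - a).sqrt() * eps).requires_grad_(True)
    log_r = log_ratio_net(xt, t)                       # (N,) per-sample log ratio
    score_corr = torch.autograd.grad(log_r.sum(), xt, create_graph=True)[0]
    target = -eps / (1 - a).sqrt() + score_corr        # corrected score target
    w = log_r.detach().exp().view(-1, *([1] * (x0.dim() - 1)))
    err = score_net(xt.detach(), t) - target
    return (w * err ** 2).flatten(1).sum(dim=1).mean()
```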

Label-Noise Robust Diffusion Models

1 code implementation • 27 Feb 2024 • Byeonghu Na, Yeongmin Kim, HeeSun Bae, Jung Hyun Lee, Se Jung Kwon, Wanmo Kang, Il-Chul Moon

This paper proposes Transition-aware weighted Denoising Score Matching (TDSM) for training conditional diffusion models with noisy labels, the first such study for diffusion models.

Denoising
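A hedged sketch of transition-aware weighting for denoising score matching; the posterior weights below are an assumption made for illustration (the paper's actual weights are time-dependent and differ).

```python
import torch

def tdsm_style_loss(score_net, x0, noisy_y, t, alpha_bar, posterior):
    """Sketch: the score conditioned on a noisy label is matched through a
    transition-weighted mixture of clean-label conditional scores.
    posterior[j, i] is an assumed p(clean = i | noisy = j)."""
    eps = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    xt = a.sqrt() * x0 + (1 - a).sqrt() * eps
    K = posterior.shape[0]
    mix = torch.zeros_like(x0)
    for i in range(K):  # transition-weighted combination over clean classes
        w = posterior[noisy_y, i].view(-1, *([1] * (x0.dim() - 1)))
        mix = mix + w * score_net(xt, torch.full_like(noisy_y, i), t)
    target = -eps / (1 - a).sqrt()                     # perturbation-kernel score
    return ((mix - target) ** 2).flatten(1).sum(dim=1).mean()
```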

SAAL: Sharpness-Aware Active Learning

1 code implementation • Proceedings of the 40th International Conference on Machine Learning 2023 • Yoon-Yeong Kim, Youngjae Cho, JoonHo Jang, Byeonghu Na, Yeongmin Kim, Kyungwoo Song, Wanmo Kang, Il-Chul Moon

Specifically, our proposed method, Sharpness-Aware Active Learning (SAAL), constructs its acquisition function by selecting unlabeled instances whose perturbed loss is maximal.

Active Learning • Image Classification • +3
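A minimal sketch of such a sharpness-aware acquisition score under stated assumptions: pseudo-labels come from the current model, and the SAM-style weight perturbation is computed per batch rather than per instance as in the paper; names are hypothetical.

```python
import torch
import torch.nn.functional as F

def saal_style_scores(model, pool_loader, rho=0.05):
    """Sketch: score each unlabeled batch by its loss after a SAM-style
    weight perturbation, using the model's own pseudo-labels."""
    scores = []
    for x in pool_loader:
        y_pseudo = model(x).argmax(dim=1)              # pseudo-labels
        loss = F.cross_entropy(model(x), y_pseudo)
        params = [p for p in model.parameters() if p.requires_grad]
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
        with torch.no_grad():
            for p, g in zip(params, grads):            # perturb the weights
                p.add_(g * (rho / norm))
            perturbed = F.cross_entropy(model(x), y_pseudo, reduction='none')
            for p, g in zip(params, grads):            # restore the weights
                p.sub_(g * (rho / norm))
        scores.append(perturbed)
    return torch.cat(scores)  # query the instances with the largest scores
```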

Unknown-Aware Domain Adversarial Learning for Open-Set Domain Adaptation

1 code implementation • 15 Jun 2022 • JoonHo Jang, Byeonghu Na, DongHyeok Shin, Mingi Ji, Kyungwoo Song, Il-Chul Moon

Therefore, we propose Unknown-Aware Domain Adversarial Learning (UADAL), which aligns the source and target-known distributions while simultaneously segregating the target-unknown distribution in the feature alignment procedure.

Domain Adaptation
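A hedged sketch of the align-and-segregate idea with a three-way domain discriminator; the discriminator layout and the soft known/unknown assignment w_known are assumptions, not the paper's exact sequential optimization.

```python
import torch
import torch.nn.functional as F

def uadal_style_domain_loss(disc, f_src, f_tgt, w_known):
    """Sketch: a domain discriminator over (source, target-known,
    target-unknown). Target features are softly assigned by w_known, an
    assumed per-sample probability of belonging to a known class."""
    d_src = disc(f_src)                                # (Ns, 3) logits
    d_tgt = disc(f_tgt)                                # (Nt, 3) logits
    src_labels = torch.zeros(len(f_src), dtype=torch.long, device=f_src.device)
    loss_src = F.cross_entropy(d_src, src_labels)
    logp_tgt = F.log_softmax(d_tgt, dim=1)
    # Align known target mass with slot 1; segregate unknown mass to slot 2.
    loss_tgt = -(w_known * logp_tgt[:, 1] + (1 - w_known) * logp_tgt[:, 2]).mean()
    return loss_src + loss_tgt
```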

Maximum Likelihood Training of Implicit Nonlinear Diffusion Models

1 code implementation • 27 May 2022 • Dongjun Kim, Byeonghu Na, Se Jung Kwon, Dongsoo Lee, Wanmo Kang, Il-Chul Moon

Whereas diverse variations of diffusion models exist, only a few works have investigated extending the linear diffusion into a nonlinear diffusion process.

Image Generation

Multi-modal Text Recognition Networks: Interactive Enhancements between Visual and Semantic Features

2 code implementations • 30 Nov 2021 • Byeonghu Na, Yoonsik Kim, Sungrae Park

Furthermore, MATRN encourages the fusion of semantic features into visual features by hiding visual clues related to the character during the training phase.

Scene Text Recognition
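A hedged sketch of that masking trick; shapes and names are assumed for illustration, not taken from the MATRN code.

```python
import torch

def hide_character_clues(visual_feats, attn_maps, p_mask=0.1):
    """Sketch: during training, zero out the visual feature each character
    attends to most, so the model must rely on semantic features instead.

    visual_feats: (B, HW, D) flattened visual features
    attn_maps   : (B, T, HW) character-to-position attention
    """
    with torch.no_grad():
        top_pos = attn_maps.argmax(dim=-1)             # (B, T) strongest clue
        drop = torch.rand_like(top_pos, dtype=torch.float) < p_mask
        mask = torch.ones(visual_feats.shape[:2], device=visual_feats.device)
        b_idx = torch.arange(mask.size(0), device=mask.device)
        b_idx = b_idx.unsqueeze(1).expand_as(top_pos)
        mask[b_idx[drop], top_pos[drop]] = 0.0         # hide the selected clues
    return visual_feats * mask.unsqueeze(-1)
```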

Maximum Likelihood Training of Parametrized Diffusion Model

no code implementations • 29 Sep 2021 • Dongjun Kim, Byeonghu Na, Se Jung Kwon, Dongsoo Lee, Wanmo Kang, Il-Chul Moon

Specifically, PDM utilizes a flow to non-linearly transform a data variable into a latent variable and applies the diffusion process, with its linear diffusing mechanism, to the transformed latent distribution.

Image Generation
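A minimal sketch of that construction (which also underlies the INDM entry above), assuming a generic invertible flow and a VP-style linear diffusion in the latent space.

```python
import torch

def nonlinear_diffusion_forward(flow, x0, t, alpha_bar):
    """Sketch: a hypothetical invertible `flow` maps data x0 to a latent z0,
    and a standard VP-style linear diffusion is applied in latent space, so
    the induced data-space diffusion is nonlinear through the flow."""
    z0 = flow(x0)                                      # nonlinear transform
    a = alpha_bar[t].view(-1, *([1] * (z0.dim() - 1)))
    eps = torch.randn_like(z0)
    zt = a.sqrt() * z0 + (1 - a).sqrt() * eps          # linear diffusion on z0
    return zt, eps
```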
