Search Results for author: Saehyung Lee

Found 10 papers, 6 papers with code

Entropy is not Enough for Test-Time Adaptation: From the Perspective of Disentangled Factors

no code implementations • 12 Mar 2024 • Jonghyun Lee, Dahuin Jung, Saehyung Lee, Junsung Park, Juhyeon Shin, Uiwon Hwang, Sungroh Yoon

To mitigate this, TTA methods have used the entropy of the model's output as a confidence metric, aiming to identify which samples are less likely to cause errors.

Object • Pseudo Label • +1
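The entropy-as-confidence heuristic that this paper examines can be sketched as follows. This is a minimal illustration, not the authors' code; the function names and the example threshold are assumptions.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predictive_entropy(logits):
    # Shannon entropy of the model's softmax output;
    # low entropy is treated as high confidence.
    probs = softmax(logits)
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_confident(batch_logits, threshold):
    # Keep only samples whose predictive entropy falls below the
    # threshold, as entropy-based TTA methods do before adapting.
    return [i for i, logits in enumerate(batch_logits)
            if predictive_entropy(logits) < threshold]

confident = [8.0, 0.0, 0.0]   # peaked output: low entropy
uncertain = [1.0, 1.0, 1.0]   # flat output: high entropy
print(select_confident([confident, uncertain], threshold=0.5))  # → [0]
```

The paper's point is that this filter alone can be misleading, which is why entropy "is not enough" as a selection criterion.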

Gradient Alignment with Prototype Feature for Fully Test-time Adaptation

no code implementations • 14 Feb 2024 • Juhyeon Shin, Jonghyun Lee, Saehyung Lee, MinJun Park, Dongjun Lee, Uiwon Hwang, Sungroh Yoon

In the context of Test-time Adaptation (TTA), we propose a regularizer, dubbed Gradient Alignment with Prototype feature (GAP), which alleviates the inappropriate guidance that the entropy minimization loss receives from misclassified pseudo labels.

Pseudo Label • Test-time Adaptation

DAFA: Distance-Aware Fair Adversarial Training

1 code implementation • 23 Jan 2024 • Hyungyu Lee, Saehyung Lee, Hyemi Jang, Junsung Park, Ho Bae, Sungroh Yoon

The disparity in accuracy between classes in standard training is amplified during adversarial training, a phenomenon termed the robust fairness problem.

Fairness

On mitigating stability-plasticity dilemma in CLIP-guided image morphing via geodesic distillation loss

1 code implementation • 19 Jan 2024 • Yeongtak Oh, Saehyung Lee, Uiwon Hwang, Sungroh Yoon

Large-scale language-vision pre-training models, such as CLIP, have achieved remarkable text-guided image morphing results by leveraging several unconditional generative models.

Image Morphing

Inducing Data Amplification Using Auxiliary Datasets in Adversarial Training

1 code implementation • 27 Sep 2022 • Saehyung Lee, Hyungyu Lee

Several recent studies have shown that the use of extra in-distribution data can lead to a high level of adversarial robustness.

Adversarial Robustness

Dataset Condensation with Contrastive Signals

2 code implementations • 7 Feb 2022 • Saehyung Lee, Sanghyuk Chun, Sangwon Jung, Sangdoo Yun, Sungroh Yoon

However, in this study, we prove that the existing DC methods can perform worse than the random selection method when task-irrelevant information forms a significant part of the training dataset.

Attribute • Continual Learning • +2

Biased Multi-Domain Adversarial Training

no code implementations • 29 Sep 2021 • Saehyung Lee, Hyungyu Lee, Sanghyuk Chun, Sungroh Yoon

Several recent studies have shown that the use of extra in-distribution data can lead to a high level of adversarial robustness.

Adversarial Robustness

Removing Undesirable Feature Contributions Using Out-of-Distribution Data

1 code implementation • ICLR 2021 • Saehyung Lee, Changhwa Park, Hyungyu Lee, Jihun Yi, Jonghyun Lee, Sungroh Yoon

Herein, we propose a data augmentation method to improve generalization in both adversarial and standard learning by using out-of-distribution (OOD) data that are devoid of the abovementioned issues.

Data Augmentation

Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization

2 code implementations • CVPR 2020 • Saehyung Lee, Hyungyu Lee, Sungroh Yoon

In this paper, we identify Adversarial Feature Overfitting (AFO), which may cause poor adversarially robust generalization, and we show that adversarial training can overshoot the optimal point in terms of robust generalization, leading to AFO in our simple Gaussian model.

Adversarial Robustness • Data Augmentation
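The adversarial vertex idea can be illustrated with a hedged sketch: extend the adversarial perturbation past the adversarial example, then train on a random interpolation between the clean input and that vertex. This is not the paper's implementation (the method also interpolates soft labels, and `gamma` and the uniform interpolation weight here are assumptions).

```python
import random

def adversarial_vertex(x, x_adv, gamma=2.0):
    # Extend the adversarial perturbation beyond the adversarial example:
    # x_av = x + gamma * (x_adv - x), with gamma > 1 (assumed scaling).
    return [xi + gamma * (ai - xi) for xi, ai in zip(x, x_adv)]

def av_mixup(x, x_adv, gamma=2.0, rng=None):
    # Train on a random point between the clean input and the
    # adversarial vertex, rather than on the adversarial example itself.
    rng = rng or random.Random(0)
    x_av = adversarial_vertex(x, x_adv, gamma)
    lam = rng.random()  # interpolation weight in [0, 1)
    return [lam * xi + (1.0 - lam) * vi for xi, vi in zip(x, x_av)]
```

With `gamma=2.0`, the vertex lies twice as far from the clean input as the adversarial example, so training samples cover a wider segment of the perturbation direction.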

Computation-Efficient Quantization Method for Deep Neural Networks

no code implementations • 27 Sep 2018 • Parichay Kapoor, Dongsoo Lee, Byeongwook Kim, Saehyung Lee

We present a non-intrusive quantization technique based on re-training the full precision model, followed by directly optimizing the corresponding binary model.

Quantization
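A generic weight-binarization step of the kind such binary models rely on might look like the following. This is an assumption-labelled sketch using scaled sign binarization; it does not reproduce the paper's actual optimization of the binary model.

```python
def binarize(weights):
    # Scaled sign binarization: each weight becomes sign(w) * alpha,
    # where alpha is the mean absolute weight (a common scaling choice;
    # the paper's own binary-model optimization may differ).
    alpha = sum(abs(w) for w in weights) / len(weights)
    return [alpha if w >= 0 else -alpha for w in weights]

print(binarize([1.0, -2.0, 3.0]))  # → [2.0, -2.0, 2.0]
```

Each binarized layer then needs only one full-precision scalar (`alpha`) plus one sign bit per weight, which is where the computational savings come from.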
