2 code implementations • 7 Feb 2022 • Saehyung Lee, Sanghyuk Chun, Sangwon Jung, Sangdoo Yun, Sungroh Yoon
However, in this study, we prove that existing dataset condensation (DC) methods can perform worse than simple random selection when task-irrelevant information forms a significant part of the training dataset.
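For reference, the random-selection baseline amounts to drawing a uniform random subset of the training set to serve as the condensed dataset; a minimal sketch (the function name and seeding are illustrative assumptions):

```python
import torch
from torch.utils.data import Subset

def random_selection_baseline(dataset, budget, seed=0):
    """Pick `budget` training examples uniformly at random
    to serve as the condensed dataset."""
    g = torch.Generator().manual_seed(seed)
    idx = torch.randperm(len(dataset), generator=g)[:budget]
    return Subset(dataset, idx.tolist())
```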
2 code implementations • CVPR 2020 • Saehyung Lee, Hyungyu Lee, Sungroh Yoon
In this paper, we identify Adversarial Feature Overfitting (AFO), which may cause poor adversarially robust generalization, and we show in a simple Gaussian model that adversarial training can overshoot the optimal point for robust generalization, leading to AFO.
1 code implementation • ICLR 2021 • Saehyung Lee, Changhwa Park, Hyungyu Lee, Jihun Yi, Jonghyun Lee, Sungroh Yoon
Herein, we propose a data augmentation method that uses out-of-distribution (OOD) data to improve generalization in both adversarial and standard learning.
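The listing does not spell out the training objective, but one plausible way to use OOD data as augmentation is to keep the usual cross-entropy loss on in-distribution samples while pushing OOD samples toward a uniform predictive distribution; a hedged sketch (the function name, `ood_weight`, and the uniform-target choice are assumptions, not necessarily the paper's exact scheme):

```python
import torch
import torch.nn.functional as F

def augmented_training_step(model, optimizer, x_in, y_in, x_ood, ood_weight=1.0):
    """One step of standard training augmented with OOD data:
    cross-entropy on in-distribution samples, plus a loss pushing
    OOD samples toward a uniform predictive distribution."""
    optimizer.zero_grad()
    loss_in = F.cross_entropy(model(x_in), y_in)

    log_probs_ood = F.log_softmax(model(x_ood), dim=1)
    # Cross-entropy against a uniform target: -(1/C) * sum_c log p_c.
    loss_ood = -log_probs_ood.mean(dim=1).mean()

    loss = loss_in + ood_weight * loss_ood
    loss.backward()
    optimizer.step()
    return loss.item()
```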
1 code implementation • 27 Sep 2022 • Saehyung Lee, Hyungyu Lee
Several recent studies have shown that the use of extra in-distribution data can lead to a high level of adversarial robustness.
1 code implementation • 23 Jan 2024 • Hyungyu Lee, Saehyung Lee, Hyemi Jang, Junsung Park, Ho Bae, Sungroh Yoon
The disparity in accuracy between classes in standard training is amplified during adversarial training, a phenomenon termed the robust fairness problem.
no code implementations • 29 Sep 2021 • Saehyung Lee, Hyungyu Lee, Sanghyuk Chun, Sungroh Yoon
Several recent studies have shown that the use of extra in-distribution data can lead to a high level of adversarial robustness.
no code implementations • 27 Sep 2018 • Parichay Kapoor, Dongsoo Lee, Byeongwook Kim, Saehyung Lee
We present a non-intrusive quantization technique based on re-training the full-precision model, followed by directly optimizing the corresponding binary model.
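The abstract gives no implementation detail, but a common way to directly optimize a binary model against its full-precision counterpart is to keep latent full-precision weights and binarize them in the forward pass with a straight-through estimator; a minimal sketch (the class names are hypothetical, and this is not necessarily the paper's exact scheme):

```python
import torch
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        # Pass gradients through only where |w| <= 1 (hard-tanh clipping).
        return grad_output * (w.abs() <= 1).float()

class BinaryLinear(torch.nn.Linear):
    """Linear layer that binarizes its weights in the forward pass while
    the latent full-precision weights receive the gradient updates."""

    def forward(self, x):
        return F.linear(x, BinarizeSTE.apply(self.weight), self.bias)
```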
1 code implementation • 19 Jan 2024 • Yeongtak Oh, Saehyung Lee, Uiwon Hwang, Sungroh Yoon
Large-scale language-vision pre-trained models, such as CLIP, have achieved remarkable text-guided image morphing results by leveraging several unconditional generative models.
no code implementations • 14 Feb 2024 • Juhyeon Shin, Jonghyun Lee, Saehyung Lee, MinJun Park, Dongjun Lee, Uiwon Hwang, Sungroh Yoon
In the context of Test-Time Adaptation (TTA), we propose a regularizer, dubbed Gradient Alignment with Prototype feature (GAP), which alleviates the inappropriate guidance that the entropy minimization loss provides on misclassified pseudo-labels.
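GAP's exact objective is not reproduced here; the following sketch only illustrates the setting it targets: an entropy-minimization TTA step with a simplified prototype-feature regularizer standing in for the gradient-alignment term (all names and the regularizer form are assumptions, not the paper's method):

```python
import torch
import torch.nn.functional as F

def tta_step(feature_extractor, classifier, optimizer, x, prototypes, reg_weight=1.0):
    """Entropy-minimization TTA step with a simplified prototype regularizer.
    NOTE: this stands in for, but is not, the paper's GAP objective, which
    aligns the gradient of the entropy loss with prototype directions."""
    optimizer.zero_grad()
    feats = feature_extractor(x)
    probs = F.softmax(classifier(feats), dim=1)

    # Entropy minimization: the loss GAP is designed to regularize.
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()

    # Pull features toward the prototype of their pseudo-label.
    proto = prototypes[probs.argmax(dim=1)]
    align = 1.0 - F.cosine_similarity(feats, proto, dim=1).mean()

    loss = entropy + reg_weight * align
    loss.backward()
    optimizer.step()
    return loss.item()
```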
no code implementations • 12 Mar 2024 • Jonghyun Lee, Dahuin Jung, Saehyung Lee, Junsung Park, Juhyeon Shin, Uiwon Hwang, Sungroh Yoon
To mitigate this, TTA methods have used the entropy of the model's output as a confidence metric, aiming to identify samples that are less likely to cause errors.
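As a concrete illustration of this confidence heuristic, a minimal sketch of entropy-based sample filtering (the threshold is a hypothetical hyperparameter):

```python
import torch.nn.functional as F

def entropy_confidence_mask(logits, threshold):
    """Flag samples whose predictive entropy is below `threshold`;
    only these 'confident' samples would be used for adaptation."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    return entropy < threshold
```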