1 code implementation • 27 Sep 2022 • Saehyung Lee, Hyungyu Lee
Several recent studies have shown that the use of extra in-distribution data can lead to a high level of adversarial robustness.
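A minimal sketch of how extra in-distribution data are commonly folded into adversarial training in this line of work (pseudo-label an unlabeled pool with a pretrained teacher, then run PGD training on the union); the teacher, hyperparameters, and loop below are illustrative assumptions, not this paper's exact procedure:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-inf PGD; returns adversarial examples clipped to the eps-ball."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv

def train_step(model, opt, x_lab, y_lab, x_extra, teacher):
    # Pseudo-label the extra in-distribution batch with a standard (non-robust) teacher.
    with torch.no_grad():
        y_extra = teacher(x_extra).argmax(dim=1)
    x = torch.cat([x_lab, x_extra])
    y = torch.cat([y_lab, y_extra])
    # Adversarial training on the combined labeled + pseudo-labeled batch.
    x_adv = pgd_attack(model, x, y)
    opt.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()
```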
2 code implementations • 7 Feb 2022 • Saehyung Lee, Sanghyuk Chun, Sangwon Jung, Sangdoo Yun, Sungroh Yoon
However, in this study, we prove that existing dataset condensation (DC) methods can perform worse than random selection when task-irrelevant information forms a significant part of the training dataset.
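For context, a minimal sketch of the gradient-matching family of DC methods that such results concern; the random-selection baseline is simply a uniform subsample of the real data. Function names and hyperparameters here are assumptions, not this paper's implementation:

```python
import torch
import torch.nn.functional as F

def match_loss(g_real, g_syn):
    """Cosine distance between corresponding real/synthetic gradient tensors."""
    return sum(1 - F.cosine_similarity(a.flatten(), b.flatten(), dim=0)
               for a, b in zip(g_real, g_syn))

def condense_step(model, x_real, y_real, x_syn, y_syn, syn_opt):
    params = [p for p in model.parameters() if p.requires_grad]
    # Gradients of the task loss on real data (treated as fixed targets).
    g_real = torch.autograd.grad(F.cross_entropy(model(x_real), y_real), params)
    g_real = [g.detach() for g in g_real]
    # Gradients on synthetic data, kept differentiable so the matching loss
    # can backpropagate into the synthetic images themselves.
    g_syn = torch.autograd.grad(F.cross_entropy(model(x_syn), y_syn), params,
                                create_graph=True)
    syn_opt.zero_grad()
    loss = match_loss(g_real, g_syn)
    loss.backward()  # updates flow into x_syn, a leaf tensor with requires_grad=True
    syn_opt.step()
    return loss.item()

# Usage sketch: x_syn = torch.randn(budget, *img_shape, requires_grad=True)
#               syn_opt = torch.optim.SGD([x_syn], lr=0.1)
# Random-selection baseline: idx = torch.randperm(len(dataset))[:budget]
```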
1 code implementation • ICLR 2021 • Saehyung Lee, Changhwa Park, Hyungyu Lee, Jihun Yi, Jonghyun Lee, Sungroh Yoon
Herein, we propose a data augmentation method that improves generalization in both adversarial and standard learning by using out-of-distribution (OOD) data that are free of the aforementioned issues.
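A sketch of one way to operationalize this idea, assuming the OOD batch is supervised with a uniform label over the classes so the model cannot extract class evidence from it; the names and the weighting term are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def ood_augmented_loss(model, x_in, y_in, x_ood, ood_weight=1.0):
    # Standard supervised loss on in-distribution data (swap in an
    # adversarially perturbed batch here for the robust-learning setting).
    loss_in = F.cross_entropy(model(x_in), y_in)
    # OOD data get a uniform target: cross-entropy against the uniform
    # distribution reduces to the mean negative log-probability over classes.
    log_p = F.log_softmax(model(x_ood), dim=1)
    loss_ood = -log_p.mean()
    return loss_in + ood_weight * loss_ood
```

Pushing OOD predictions toward the uniform distribution discourages the network from assigning class evidence to features that appear in data outside the task distribution.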
2 code implementations • CVPR 2020 • Saehyung Lee, Hyungyu Lee, Sungroh Yoon
In this paper, we identify Adversarial Feature Overfitting (AFO), which can cause poor adversarially robust generalization, and we show in a simple Gaussian model that adversarial training can overshoot the optimal point for robust generalization, leading to AFO.
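A toy probe of this claim, not the paper's exact Gaussian model: adversarially train a linear classifier on two-class Gaussian data, where the worst-case L-inf perturbation has a closed form, and track robust test accuracy over training; whether an overshoot appears depends on the assumed dimension, sample size, and eps:

```python
import torch

torch.manual_seed(0)
d, n, eps = 20, 500, 0.5
mu = torch.ones(d) / d ** 0.5

def sample(m):
    y = torch.randint(0, 2, (m,)) * 2 - 1        # labels in {-1, +1}
    x = y[:, None] * mu + torch.randn(m, d)      # class-conditional Gaussians
    return x, y.float()

x_tr, y_tr = sample(n)
x_te, y_te = sample(5000)
w = torch.zeros(d, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)
for step in range(2000):
    # For a linear model the inner maximization is exact:
    # the worst perturbation is x - eps * y * sign(w).
    x_adv = x_tr - eps * y_tr[:, None] * torch.sign(w)[None, :].detach()
    loss = torch.nn.functional.softplus(-y_tr * (x_adv @ w)).mean()  # logistic loss
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 200 == 0:
        with torch.no_grad():
            adv_te = x_te - eps * y_te[:, None] * torch.sign(w)[None, :]
            acc = ((adv_te @ w) * y_te > 0).float().mean()
        print(f"step {step}: robust test acc {acc:.3f}")
```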
no code implementations • 27 Sep 2018 • Parichay Kapoor, Dongsoo Lee, Byeongwook Kim, Saehyung Lee
We present a non-intrusive quantization technique based on re-training the full-precision model and then directly optimizing the corresponding binary model.
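A minimal sketch of the binarization step, assuming an XNOR-Net-style sign/scale scheme trained with a straight-through estimator; the paper's "non-intrusive" re-training procedure is only stated at a high level here, so these details are assumptions:

```python
import torch

class BinaryLinear(torch.nn.Linear):
    """Linear layer whose weights are binarized in the forward pass while
    gradients update the latent full-precision weights."""
    def forward(self, x):
        # Per-layer scale alpha = mean(|W|), binary weights = alpha * sign(W).
        alpha = self.weight.abs().mean()
        w_bin = torch.sign(self.weight) * alpha
        # Straight-through estimator: forward uses w_bin, backward treats the
        # binarization as identity so gradients reach self.weight.
        w = self.weight + (w_bin - self.weight).detach()
        return torch.nn.functional.linear(x, w, self.bias)

# Usage sketch: after re-training the full-precision model, swap nn.Linear
# layers for BinaryLinear and fine-tune the resulting binary model directly.
```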