Search Results for author: Saehyung Lee

Found 6 papers, 4 papers with code

Inducing Data Amplification Using Auxiliary Datasets in Adversarial Training

1 code implementation • 27 Sep 2022 • Saehyung Lee, Hyungyu Lee

Several recent studies have shown that the use of extra in-distribution data can lead to a high level of adversarial robustness.

Adversarial Robustness
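
This paper and the "Biased Multi-Domain Adversarial Training" entry below both build on PGD-based adversarial training with extra data. For orientation, here is a minimal PyTorch sketch of PGD adversarial training that mixes in an auxiliary batch; the joint objective and the `aux_weight` coefficient are illustrative assumptions, not the paper's exact method.

```python
# Sketch: PGD adversarial training with an auxiliary batch (assumptions noted).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Generate L-infinity PGD adversarial examples for inputs in [0, 1]."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def train_step(model, opt, primary_batch, aux_batch, aux_weight=0.5):
    x, y = primary_batch    # in-distribution examples
    xa, ya = aux_batch      # auxiliary examples; labels assumed already mapped
                            # into the primary label space (a simplification)
    x_adv = pgd_attack(model, x, y)
    xa_adv = pgd_attack(model, xa, ya)
    loss = (F.cross_entropy(model(x_adv), y)
            + aux_weight * F.cross_entropy(model(xa_adv), ya))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```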

Dataset Condensation with Contrastive Signals

2 code implementations • 7 Feb 2022 • Saehyung Lee, Sanghyuk Chun, Sangwon Jung, Sangdoo Yun, Sungroh Yoon

However, in this study, we prove that existing DC methods can perform worse than random selection when task-irrelevant information forms a significant part of the training dataset.

Continual Learning • Dataset Condensation • +1
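
DCC builds on the gradient-matching formulation of dataset condensation: synthetic images are optimized so that the network gradients they induce match those induced by real data. A minimal sketch of that base loss follows; the cosine-distance choice mirrors the standard DC recipe, and DCC's contrastive modification (roughly, matching gradients aggregated across classes rather than one class at a time) is only indicated in a comment.

```python
# Sketch: gradient matching for dataset condensation (base DC loss).
import torch
import torch.nn.functional as F

def grad_match_loss(model, real_batch, syn_batch):
    (xr, yr), (xs, ys) = real_batch, syn_batch
    # Gradients of the training loss w.r.t. model parameters on real data.
    g_real = torch.autograd.grad(
        F.cross_entropy(model(xr), yr), model.parameters())
    g_real = [g.detach() for g in g_real]
    # Same gradients on synthetic data, kept differentiable w.r.t. xs.
    # DCC would aggregate these gradients over classes before matching.
    g_syn = torch.autograd.grad(
        F.cross_entropy(model(xs), ys), model.parameters(),
        create_graph=True)
    # Layer-wise cosine distance between the two gradient sets.
    return sum(1 - F.cosine_similarity(gr.flatten(), gs.flatten(), dim=0)
               for gr, gs in zip(g_real, g_syn))

# Usage: the synthetic images themselves are the learnable parameters, e.g.
# syn_x = torch.randn(n_syn, 3, 32, 32, requires_grad=True)
# opt = torch.optim.SGD([syn_x], lr=0.1)
```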

Biased Multi-Domain Adversarial Training

no code implementations • 29 Sep 2021 • Saehyung Lee, Hyungyu Lee, Sanghyuk Chun, Sungroh Yoon

Several recent studies have shown that the use of extra in-distribution data can lead to a high level of adversarial robustness.

Adversarial Robustness

Removing Undesirable Feature Contributions Using Out-of-Distribution Data

1 code implementation • ICLR 2021 • Saehyung Lee, Changhwa Park, Hyungyu Lee, Jihun Yi, Jonghyun Lee, Sungroh Yoon

Herein, we propose a data augmentation method to improve generalization in both adversarial and standard learning by using out-of-distribution (OOD) data that are devoid of the aforementioned issues.

Data Augmentation
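
One common way to use OOD data in training, consistent with the abstract's goal of suppressing undesirable feature contributions, is to assign uniform soft labels to the OOD examples so they carry no class-discriminative signal. The sketch below implements such a joint loss; the `ood_weight` knob is a hypothetical parameter, not necessarily the paper's exact objective.

```python
# Sketch: joint loss over in-distribution and uniform-labeled OOD data.
import torch
import torch.nn.functional as F

def ood_augmented_loss(model, in_batch, ood_x, ood_weight=1.0):
    x, y = in_batch
    loss_in = F.cross_entropy(model(x), y)
    # Cross-entropy against the uniform distribution equals the negative
    # mean log-probability over classes (up to an additive constant log K).
    log_p = F.log_softmax(model(ood_x), dim=1)
    loss_ood = -log_p.mean(dim=1).mean()
    return loss_in + ood_weight * loss_ood
```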

Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization

2 code implementations • CVPR 2020 • Saehyung Lee, Hyungyu Lee, Sungroh Yoon

In this paper, we identify Adversarial Feature Overfitting (AFO), which may cause poor adversarially robust generalization, and we show that adversarial training can overshoot the optimal point in terms of robust generalization, leading to AFO in our simple Gaussian model.

Adversarial Robustness • Data Augmentation
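
Adversarial Vertex Mixup, as named in the title, extends the adversarial perturbation by a factor gamma to define an "adversarial vertex", then interpolates both the input and a softened label between the clean point and that vertex. A minimal sketch, assuming a perturbation `delta` has already been computed (e.g., by a PGD attack like the one sketched above); the hyperparameter values here are assumptions rather than the paper's reported settings.

```python
# Sketch: Adversarial Vertex mixup (hyperparameter values are assumptions).
import torch
import torch.nn.functional as F

def label_smooth(y, num_classes, factor):
    """Put `factor` mass on the true class, spread the rest uniformly."""
    one_hot = F.one_hot(y, num_classes).float()
    return one_hot * factor + (1 - one_hot) * (1 - factor) / (num_classes - 1)

def avmixup(x, y, delta, num_classes, gamma=2.0, ls_clean=0.9, ls_adv=0.5):
    vertex = x + gamma * delta                          # adversarial vertex
    lam = torch.rand(x.size(0), 1, 1, 1, device=x.device)
    x_mix = lam * x + (1 - lam) * vertex                # input interpolation
    y_mix = (lam.view(-1, 1) * label_smooth(y, num_classes, ls_clean)
             + (1 - lam.view(-1, 1)) * label_smooth(y, num_classes, ls_adv))
    return x_mix, y_mix  # train with a soft-label cross-entropy
```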

Computation-Efficient Quantization Method for Deep Neural Networks

no code implementations • 27 Sep 2018 • Parichay Kapoor, Dongsoo Lee, Byeongwook Kim, Saehyung Lee

We present a non-intrusive quantization technique based on re-training the full precision model, followed by directly optimizing the corresponding binary model.

Quantization
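
The abstract describes re-training the full-precision model and then directly optimizing the corresponding binary model. The binarization step itself can be sketched as below, using the common per-layer scale alpha = mean(|W|); that XNOR-Net-style choice is an assumption here, not something the paper confirms.

```python
# Sketch: binarize a trained model's weights as alpha * sign(W).
import torch
import torch.nn as nn

@torch.no_grad()
def binarize_model(model: nn.Module) -> nn.Module:
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            w = module.weight
            alpha = w.abs().mean()              # per-layer scaling factor
            module.weight.copy_(alpha * torch.sign(w))
    return model
```

In practice the binary model would then be fine-tuned, typically with a straight-through estimator for the sign function; that loop is omitted here.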
