1 code implementation • 23 Jan 2024 • Hyungyu Lee, Saehyung Lee, Hyemi Jang, Junsung Park, Ho Bae, Sungroh Yoon
The disparity in accuracy between classes in standard training is amplified during adversarial training, a phenomenon termed the robust fairness problem.
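The class-wise disparity the excerpt refers to can be quantified by comparing per-class accuracies. As a minimal generic sketch (not the paper's method; the function and example labels below are hypothetical), one might measure it as the gap between the best- and worst-performing classes:

```python
import numpy as np

def per_class_accuracy(y_true, y_pred, num_classes):
    """Accuracy computed separately for each class."""
    accs = []
    for c in range(num_classes):
        mask = y_true == c
        # NaN if the class is absent from y_true
        accs.append(float((y_pred[mask] == c).mean()) if mask.any() else float("nan"))
    return accs

# Toy example: class 1 is classified far less accurately than class 0.
y_true = np.array([0, 0, 0, 1, 1, 1])
y_pred = np.array([0, 0, 0, 1, 0, 0])

accs = per_class_accuracy(y_true, y_pred, num_classes=2)
disparity = max(accs) - min(accs)  # gap between best and worst class
```

Under adversarial training, `y_pred` would come from evaluating on adversarially perturbed inputs, and the robust fairness problem corresponds to this gap growing relative to standard training.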
no code implementations • 14 Mar 2023 • Dahuin Jung, Hyungyu Lee, Sungroh Yoon
In particular, unlike existing self-supervised learning methods for tabular data, we propose a corruption method for state and action representations that is robust to diverse distortions.
1 code implementation • 27 Sep 2022 • Saehyung Lee, Hyungyu Lee
Several recent studies have shown that the use of extra in-distribution data can lead to a high level of adversarial robustness.
no code implementations • 20 Mar 2022 • Han Gyel Sun, Hyunjae Ahn, Hyungyu Lee, Injung Kim
In this paper, we propose a new adapter network for adapting a pre-trained deep neural network to a target domain with minimal computation.
no code implementations • 29 Sep 2021 • Saehyung Lee, Hyungyu Lee, Sanghyuk Chun, Sungroh Yoon
Several recent studies have shown that the use of extra in-distribution data can lead to a high level of adversarial robustness.
no code implementations • 11 Aug 2021 • Hyungyu Lee, Myeongwoo Jeong, Chanyoung Kim, Hyungtae Lim, Changgue Park, Sungwon Hwang, Hyun Myung
In this paper, a novel reinforcement learning-based method is proposed to control a tilting multirotor in real-world applications; to the best of our knowledge, this is the first attempt to apply reinforcement learning to a tilting multirotor.
1 code implementation • ICLR 2022 • Uiwon Hwang, Heeseung Kim, Dahuin Jung, Hyemi Jang, Hyungyu Lee, Sungroh Yoon
Generative adversarial networks (GANs) with clustered latent spaces can perform conditional generation in a completely unsupervised manner.
1 code implementation • ICLR 2021 • Saehyung Lee, Changhwa Park, Hyungyu Lee, Jihun Yi, Jonghyun Lee, Sungroh Yoon
Herein, we propose a data augmentation method that improves generalization in both adversarial and standard learning by using out-of-distribution (OOD) data that are free of the aforementioned issues.
2 code implementations • CVPR 2020 • Saehyung Lee, Hyungyu Lee, Sungroh Yoon
In this paper, we identify Adversarial Feature Overfitting (AFO), which may cause poor adversarially robust generalization. Using a simple Gaussian model, we show that adversarial training can overshoot the optimal point in terms of robust generalization, leading to AFO.
no code implementations • 31 Jul 2018 • Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon
Furthermore, the privacy of the data involved in model training is also threatened by attacks such as the model-inversion attack, or by dishonest service providers of AI applications.