Search Results for author: Hyungyu Lee

Found 8 papers, 4 papers with code

Inducing Data Amplification Using Auxiliary Datasets in Adversarial Training

1 code implementation · 27 Sep 2022 · Saehyung Lee, Hyungyu Lee

Several recent studies have shown that the use of extra in-distribution data can lead to a high level of adversarial robustness.

Adversarial Robustness

Unidirectional Thin Adapter for Efficient Adaptation of Deep Neural Networks

no code implementations · 20 Mar 2022 · Han Gyel Sun, Hyunjae Ahn, Hyungyu Lee, Injung Kim

In this paper, we propose a new adapter network for adapting a pre-trained deep neural network to a target domain with minimal computation.

Biased Multi-Domain Adversarial Training

no code implementations · 29 Sep 2021 · Saehyung Lee, Hyungyu Lee, Sanghyuk Chun, Sungroh Yoon

Several recent studies have shown that the use of extra in-distribution data can lead to a high level of adversarial robustness.

Adversarial Robustness

Low-level Pose Control of Tilting Multirotor for Wall Perching Tasks Using Reinforcement Learning

no code implementations · 11 Aug 2021 · Hyungyu Lee, Myeongwoo Jeong, Chanyoung Kim, Hyungtae Lim, Changgue Park, Sungwon Hwang, Hyun Myung

In this paper, a novel reinforcement learning-based method is proposed to control a tilting multirotor in real-world applications, the first attempt to apply reinforcement learning to a tilting multirotor.


Stein Latent Optimization for Generative Adversarial Networks

1 code implementation · ICLR 2022 · Uiwon Hwang, Heeseung Kim, Dahuin Jung, Hyemi Jang, Hyungyu Lee, Sungroh Yoon

Generative adversarial networks (GANs) with clustered latent spaces can perform conditional generation in a completely unsupervised manner.

Removing Undesirable Feature Contributions Using Out-of-Distribution Data

1 code implementation · ICLR 2021 · Saehyung Lee, Changhwa Park, Hyungyu Lee, Jihun Yi, Jonghyun Lee, Sungroh Yoon

Herein, we propose a data augmentation method that improves generalization in both adversarial and standard learning by using out-of-distribution (OOD) data that are free of the aforementioned issues.

Data Augmentation
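The listing gives only the abstract, but the core idea it describes (supplementing the training set with OOD samples that carry no class-specific signal) can be sketched roughly. The snippet below is a minimal NumPy illustration, not the authors' released code; the helper names, the uniform-label choice, and the batch layout are assumptions for illustration.

```python
import numpy as np

def uniform_soft_labels(num_ood: int, num_classes: int) -> np.ndarray:
    # Give each OOD sample a uniform distribution over all classes,
    # so it contributes no class-specific evidence during training.
    return np.full((num_ood, num_classes), 1.0 / num_classes)

def augment_batch(x_in: np.ndarray, y_in: np.ndarray,
                  x_ood: np.ndarray, num_classes: int):
    # Hypothetical helper: concatenate an in-distribution batch with
    # OOD samples carrying uniform soft labels. A real training loop
    # would feed the result to a soft-label (cross-entropy) loss.
    y_ood = uniform_soft_labels(len(x_ood), num_classes)
    x = np.concatenate([x_in, x_ood], axis=0)
    y = np.concatenate([y_in, y_ood], axis=0)
    return x, y
```

With soft labels in place, the same loss handles both kinds of samples: the OOD rows simply pull the model toward maximum-entropy predictions on inputs outside the target distribution.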

Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization

2 code implementations · CVPR 2020 · Saehyung Lee, Hyungyu Lee, Sungroh Yoon

In this paper, we identify Adversarial Feature Overfitting (AFO), which can cause poor adversarially robust generalization, and we show in a simple Gaussian model that adversarial training can overshoot the optimal point for robust generalization, leading to AFO.

Adversarial Robustness · Data Augmentation
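The title names Adversarial Vertex Mixup, a mixup-style interpolation between a clean input and a scaled adversarial perturbation. The sketch below illustrates that interpolation only; the scaling factor, smoothing values, and sampling of the mixing weight are illustrative assumptions, not the paper's exact notation or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def label_smooth(y_onehot: np.ndarray, factor: float) -> np.ndarray:
    # Standard label smoothing: move `factor` of the probability mass
    # off the true class and spread it uniformly over all classes.
    num_classes = y_onehot.shape[-1]
    return y_onehot * (1.0 - factor) + factor / num_classes

def av_mixup(x, y_onehot, delta, gamma=2.0, ls_clean=0.1, ls_adv=0.5):
    # Sketch of adversarial-vertex-style mixup: `delta` is assumed to
    # be an adversarial perturbation (e.g. from PGD, not shown here).
    x_av = x + gamma * delta                 # the "adversarial vertex"
    alpha = rng.uniform(0.0, 1.0)            # random interpolation weight
    x_mix = alpha * x + (1.0 - alpha) * x_av
    # Mix correspondingly smoothed labels for the two endpoints.
    y_mix = (alpha * label_smooth(y_onehot, ls_clean)
             + (1.0 - alpha) * label_smooth(y_onehot, ls_adv))
    return x_mix, y_mix
```

Training on points sampled along this segment, rather than only at the adversarial endpoint, is the mechanism the method uses to soften the overfitting to adversarial features described in the abstract.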

Security and Privacy Issues in Deep Learning

no code implementations · 31 Jul 2018 · Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon

Furthermore, the privacy of the data involved in model training is also threatened by attacks such as the model-inversion attack, or by dishonest service providers of AI applications.
