Search Results for author: Seungyong Moon

Found 7 papers, 6 papers with code

Rethinking Value Function Learning for Generalization in Reinforcement Learning

1 code implementation • 18 Oct 2022 • Seungyong Moon, JunYeong Lee, Hyun Oh Song

Our work focuses on training RL agents on multiple visually diverse environments to improve observational generalization performance.

Reinforcement Learning (RL)

Query-Efficient and Scalable Black-Box Adversarial Attacks on Discrete Sequential Data via Bayesian Optimization

1 code implementation • 17 Jun 2022 • Deokjae Lee, Seungyong Moon, Junhyeok Lee, Hyun Oh Song

We focus on the problem of adversarial attacks against models on discrete sequential data in the black-box setting where the attacker aims to craft adversarial examples with limited query access to the victim model.

Bayesian Optimization
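The excerpt above describes the query-limited black-box setting rather than the method itself. As a rough illustration of that setting only (not the paper's Bayesian optimization attack), the sketch below greedily substitutes tokens while querying nothing but the victim's output score; the `victim_score` function, the query budget, and the synonym candidates are toy placeholders.

```python
# Illustrative sketch of a query-limited black-box attack on a discrete
# sequence: greedy token substitution using only the victim's score.
# NOT the paper's Bayesian optimization method; victim_score is a toy stub.
import random

QUERY_BUDGET = 100
queries = 0

def victim_score(tokens):
    """Black-box victim: returns the model's confidence in the true label.
    Toy placeholder -- in practice this is a remote model query."""
    global queries
    queries += 1
    return sum(len(t) for t in tokens) / (10.0 * len(tokens))  # dummy score

def greedy_blackbox_attack(tokens, candidates):
    """Lower the victim's score one position at a time, keeping the
    substitution that reduces the score the most, within the query budget."""
    best = list(tokens)
    best_score = victim_score(best)
    for i in range(len(best)):
        if queries >= QUERY_BUDGET:
            break
        for cand in candidates.get(best[i], []):
            if queries >= QUERY_BUDGET:
                break
            trial = best[:i] + [cand] + best[i + 1:]
            score = victim_score(trial)
            if score < best_score:
                best, best_score = trial, score
    return best, best_score

sentence = "the movie was surprisingly good".split()
synonyms = {"good": ["fine", "ok"], "movie": ["film", "picture"]}
adv, score = greedy_blackbox_attack(sentence, synonyms)
print(adv, score, "queries used:", queries)
```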

Preemptive Image Robustification for Protecting Users against Man-in-the-Middle Adversarial Attacks

1 code implementation • 10 Dec 2021 • Seungyong Moon, Gaon An, Hyun Oh Song

However, the vulnerability of neural networks to adversarial attacks poses a serious threat to the people affected by these systems.

Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble

4 code implementations • NeurIPS 2021 • Gaon An, Seungyong Moon, Jang-Hyun Kim, Hyun Oh Song

However, prior methods typically require accurate estimation of the behavior policy or sampling from OOD data points, both of which can be non-trivial.

Adroit door-cloned • Adroit door-human • +18
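The title points to pessimistic value estimation via a Q-ensemble. Below is a minimal PyTorch sketch of that general idea: the Bellman target takes the elementwise minimum over several independent Q-networks, so disagreement on out-of-distribution actions lowers the value estimate. The network sizes, dimensions, and ensemble size are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of pessimistic Q-ensemble targets for offline RL.
# Architecture and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def pessimistic_target(q_ensemble, obs_next, act_next, reward, done, gamma=0.99):
    """Bellman target using the elementwise minimum over the Q-ensemble."""
    with torch.no_grad():
        qs = torch.stack([q(obs_next, act_next) for q in q_ensemble], dim=0)
        q_min = qs.min(dim=0).values  # min over ensemble members
        return reward + gamma * (1.0 - done) * q_min

# Toy usage with random data (obs_dim=17, act_dim=6, ensemble of 4 critics).
obs_dim, act_dim, batch = 17, 6, 32
ensemble = [QNet(obs_dim, act_dim) for _ in range(4)]
obs_next = torch.randn(batch, obs_dim)
act_next = torch.randn(batch, act_dim)
reward = torch.randn(batch, 1)
done = torch.zeros(batch, 1)
target = pessimistic_target(ensemble, obs_next, act_next, reward, done)
print(target.shape)  # torch.Size([32, 1])
```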

Exploiting Safe Spots in Neural Networks for Preemptive Robustness and Out-of-Distribution Detection

no code implementations • 1 Jan 2021 • Seungyong Moon, Gaon An, Hyun Oh Song

Recent advances in adversarial defense mainly focus on improving the classifier's robustness against adversarially perturbed inputs.

Adversarial Defense • Out-of-Distribution Detection

Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization

1 code implementation • 16 May 2019 • Seungyong Moon, Gaon An, Hyun Oh Song

Solving for adversarial examples with projected gradient descent has been demonstrated to be highly effective in fooling neural network-based classifiers.

Combinatorial Optimization
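Since the excerpt references projected gradient descent (PGD), here is a hedged PyTorch sketch of a standard L-infinity PGD attack; the toy model, epsilon, and step size are illustrative assumptions rather than the paper's setup (the paper itself studies a black-box combinatorial alternative to this white-box baseline).

```python
# Hedged sketch of L-infinity PGD for crafting an adversarial example.
# Model, eps, alpha, and steps are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iteratively ascend the classification loss, projecting back into the
    L-infinity eps-ball around the clean input after each step."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                     # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project to eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                           # valid pixel range
        x_adv = x_adv.detach()
    return x_adv

# Toy usage: a random linear "classifier" on 3x32x32 inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = pgd_attack(model, x, y)
print((x_adv - x).abs().max().item())  # bounded by eps
```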
