Search Results for author: Xinghua Qu

Found 3 papers, 1 paper with code

Adversary Agnostic Robust Deep Reinforcement Learning

no code implementations • 14 Aug 2020 • Xinghua Qu, Yew-Soon Ong, Abhishek Gupta, Zhu Sun

Motivated by this finding, we propose a new policy distillation loss with two terms: 1) a prescription gap maximization loss that simultaneously maximizes the likelihood of the action selected by the teacher policy and the entropy over the remaining actions; and 2) a corresponding Jacobian regularization loss that minimizes the magnitude of the gradient with respect to the input state.

Adversarial Robustness • Atari Games
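The two-term loss described above can be sketched numerically. This is an illustrative reconstruction, not the authors' implementation: the entropy weight, the renormalization of the non-teacher actions, and the finite-difference Jacobian estimate are all assumptions made for the sketch.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def prescription_gap_loss(logits, teacher_action, entropy_weight=0.1):
    """Sketch of the prescription gap maximization term: raise the
    probability of the teacher's action while keeping the probabilities
    of the remaining actions close to uniform (high entropy).
    Returned negated, as a loss to be minimized."""
    p = softmax(logits)
    p_teacher = p[teacher_action]
    rest = np.delete(p, teacher_action)
    rest = rest / rest.sum()  # renormalize remaining actions (assumption)
    entropy = -np.sum(rest * np.log(rest + 1e-12))
    return -(np.log(p_teacher + 1e-12) + entropy_weight * entropy)

def jacobian_penalty(policy_fn, state, eps=1e-4):
    """Finite-difference estimate of ||d logits / d state||_F^2,
    standing in for the Jacobian regularization term."""
    base = policy_fn(state)
    total = 0.0
    for i in range(state.size):
        bumped = state.copy()
        bumped[i] += eps
        total += np.sum(((policy_fn(bumped) - base) / eps) ** 2)
    return total
```

For a linear policy `logits = W @ state`, the Jacobian penalty reduces to the squared Frobenius norm of `W`, which makes the finite-difference estimate easy to sanity-check.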

Subdomain Adaptation with Manifolds Discrepancy Alignment

no code implementations • 6 May 2020 • Pengfei Wei, Yiping Ke, Xinghua Qu, Tze-Yun Leong

Specifically, we propose to represent each subdomain with a low-dimensional manifold, and to align the local data distribution discrepancy within each manifold across domains.

Transfer Learning
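As a concrete stand-in for the kind of distribution discrepancy being aligned, a kernel two-sample statistic such as maximum mean discrepancy (MMD) measures how far apart two subdomain samples are. The RBF kernel and its bandwidth here are assumptions for illustration; they are not the manifold-based measure used in the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF (Gaussian) kernel matrix between two sample sets."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Biased estimator of squared MMD between samples X and Y:
    zero when the two sample sets coincide, large when they differ."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())
```

Minimizing such a discrepancy between corresponding subdomains, per local region rather than globally, is the general idea the abstract points at.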

Minimalistic Attacks: How Little it Takes to Fool a Deep Reinforcement Learning Policy

1 code implementation • 10 Nov 2019 • Xinghua Qu, Zhu Sun, Yew-Soon Ong, Abhishek Gupta, Pengfei Wei

Recent studies have revealed that neural network-based policies can be easily fooled by adversarial examples.

Adversarial Attack • Atari Games
