Minimalistic Attacks: How Little it Takes to Fool a Deep Reinforcement Learning Policy

10 Nov 2019 · Xinghua Qu, Zhu Sun, Yew-Soon Ong, Abhishek Gupta, Pengfei Wei

Recent studies have revealed that neural network-based policies can be easily fooled by adversarial examples. However, whereas most prior works analyze the effects of perturbing every pixel of every frame under the assumption of white-box policy access, in this paper we take a more restrictive view of adversary generation, with the goal of unveiling the limits of a model's vulnerability...
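To make the premise concrete, here is a minimal sketch of a sparse adversarial attack on a policy. The two-action linear "policy", the 8x8 frame, and the exhaustive one-pixel search are all hypothetical stand-ins chosen for illustration; the paper itself attacks deep reinforcement learning policies on far larger inputs, not this toy setup.

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.uniform(size=(8, 8))  # toy observation with pixel values in [0, 1]

# Toy two-action linear policy whose decision leans heavily on a single
# weight, so that a one-pixel change can flip the greedy action.
# Purely illustrative, not the authors' setup.
W = np.ones((2, 64))
W[1, 0] += 8.0  # action 1 depends strongly on pixel (0, 0)

def act(f):
    """Greedy action of the toy linear policy."""
    return int(np.argmax(W @ f.ravel()))

def one_pixel_attack(f):
    """Exhaustively try setting each pixel to an extreme value (0 or 1);
    return the first perturbed frame that changes the policy's action,
    mirroring the idea that very sparse perturbations can suffice."""
    a0 = act(f)
    for k in range(f.size):
        for v in (0.0, 1.0):
            g = f.ravel().copy()
            g[k] = v
            g = g.reshape(f.shape)
            if act(g) != a0:
                return g
    return None  # no single-pixel perturbation flips the action

adv = one_pixel_attack(frame)
```

In this contrived instance the search finds an adversarial frame that differs from the original in exactly one pixel yet changes the selected action, which is the kind of minimalistic perturbation the paper investigates on real policies.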


