no code implementations • 11 Dec 2017 • Anay Pattanaik, Zhenyi Tang, Shuijing Liu, Gautham Bommannan, Girish Chowdhary
This paper proposes adversarial attacks on Reinforcement Learning (RL) agents and then uses these attacks to improve the robustness of Deep Reinforcement Learning (DRL) algorithms to parameter uncertainties.
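The abstract does not specify the attack mechanism; one common family of adversarial attacks on RL observations is a gradient-based (FGSM-style) perturbation of the agent's state input. The sketch below is a hypothetical illustration of that general idea for a linear Q-function (the matrix `W`, function name, and setup are all assumptions, not the paper's method):

```python
import numpy as np

def fgsm_state_attack(W, state, epsilon):
    """FGSM-style perturbation of an observed state.

    Assumes a toy linear Q-function Q(s) = W @ s, where each row of W
    scores one action. The perturbation moves the state in the
    direction that most decreases the value of the currently greedy
    action, which is the core of gradient-based observation attacks.
    """
    q = W @ state
    best = int(np.argmax(q))        # action the agent would pick
    grad = W[best]                  # dQ(s, best)/ds for a linear Q
    # Step against the gradient, clipped to an L-infinity ball of
    # radius epsilon via the sign, as in FGSM.
    return state - epsilon * np.sign(grad)
```

Robust training in this spirit would feed such perturbed states back into the learning loop so the resulting policy degrades gracefully under the attack; the specifics here are illustrative only.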