Stealing Deep Reinforcement Learning Models for Fun and Profit

9 Jun 2020 · Kangjie Chen, Tianwei Zhang, Xiaofei Xie, Yang Liu

In this paper, we present the first attack methodology for extracting black-box Deep Reinforcement Learning (DRL) models solely from their actions in the environment. Model extraction attacks against supervised Deep Learning models have been widely studied...
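The extraction setting the abstract describes (observing only a black-box agent's actions in the environment) can be illustrated with a minimal behavior-cloning sketch. Everything below is an illustrative assumption, not the paper's actual method: `target_policy` is a stand-in for the victim agent, and the surrogate is a plain logistic-regression classifier trained on observed state/action pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box victim: we can only query its action for a
# state, with no access to its weights, gradients, or training data.
def target_policy(state):
    # stand-in decision rule for the victim DRL agent (assumption)
    return int(state @ np.array([1.0, -2.0, 0.5, 1.5]) > 0)

# Step 1: passively record state/action pairs from the agent's rollouts.
states = rng.normal(size=(2000, 4))
actions = np.array([target_policy(s) for s in states])

# Step 2: behavior cloning -- fit a surrogate via logistic regression.
w, b = np.zeros(4), 0.0
for _ in range(500):
    probs = 1.0 / (1.0 + np.exp(-(states @ w + b)))
    grad = probs - actions               # cross-entropy gradient
    w -= 0.1 * (states.T @ grad) / len(states)
    b -= 0.1 * grad.mean()

# Step 3: measure fidelity -- how often the clone matches the victim.
clone_actions = (states @ w + b > 0).astype(int)
fidelity = (clone_actions == actions).mean()
print(f"fidelity on observed states: {fidelity:.2f}")
```

For a real DRL victim the surrogate would be a neural policy and the observations would come from environment rollouts rather than random states, but the query-then-imitate structure is the same.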


