Search Results for author: Amin Rakhsha

Found 5 papers, 1 paper with code

Maximum Entropy Model Correction in Reinforcement Learning

no code implementations · 29 Nov 2023 · Amin Rakhsha, Mete Kemertas, Mohammad Ghavamzadeh, Amir-Massoud Farahmand

We propose and theoretically analyze an approach for planning with an approximate model in reinforcement learning that can reduce the adverse impact of model error.

Density Estimation · Reinforcement Learning (RL)
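
The snippet below is only a toy illustration of the problem this paper targets: planning with an approximate transition model propagates the model error into the computed value function. It is not the paper's maximum-entropy correction method, and the MDP, discount factor, and perturbation are made-up numbers.

```python
import numpy as np

# Toy 2-state, 2-action MDP; every number here is an illustrative assumption.
gamma = 0.95
r = np.array([[0.0, 1.0],
              [1.0, 0.0]])                       # rewards r[s, a]
P = np.zeros((2, 2, 2))                          # true kernel P[s, a, s']
P[0, 0] = [0.9, 0.1]; P[0, 1] = [0.5, 0.5]
P[1, 0] = [0.4, 0.6]; P[1, 1] = [0.95, 0.05]
P_hat = P.copy()                                 # approximate model with one wrong row
P_hat[0, 1] = [0.7, 0.3]

def optimal_value(kernel, iters=2000):
    """Value iteration: V(s) <- max_a [ r(s,a) + gamma * sum_s' kernel(s,a,s') V(s') ]."""
    V = np.zeros(2)
    for _ in range(iters):
        V = np.max(r + gamma * kernel @ V, axis=1)
    return V

# Gap between planning with the true model and planning with the approximate one.
print(np.max(np.abs(optimal_value(P) - optimal_value(P_hat))))
```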

Operator Splitting Value Iteration

no code implementations · 25 Nov 2022 · Amin Rakhsha, Andrew Wang, Mohammad Ghavamzadeh, Amir-Massoud Farahmand

We introduce new planning and reinforcement learning algorithms for discounted MDPs that utilize an approximate model of the environment to accelerate the convergence of the value function.

Reinforcement Learning (RL)
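
As a rough companion to the entry above, here is a minimal NumPy sketch of one way an approximate model can accelerate convergence: a matrix-splitting fixed-point iteration for policy evaluation that solves exactly under the approximate kernel P_hat and corrects with the residual (P - P_hat). The toy Markov reward process and this particular update are assumptions for illustration, not the paper's OSVI algorithm.

```python
import numpy as np

# Toy 3-state Markov reward process (a fixed policy's induced chain); illustrative numbers.
gamma = 0.9
r = np.array([1.0, 0.0, 0.5])                    # expected reward per state
P = np.array([[0.8, 0.2, 0.0],                   # true transition matrix
              [0.1, 0.6, 0.3],
              [0.0, 0.3, 0.7]])
P_hat = P + np.array([[ 0.05, -0.05,  0.00],     # approximate model, small error
                      [ 0.00,  0.05, -0.05],
                      [ 0.00,  0.05, -0.05]])

V_exact = np.linalg.solve(np.eye(3) - gamma * P, r)   # ground-truth value, for reference

def vi_step(V):
    """Plain value-iteration (Bellman) update; error shrinks by a factor of gamma per step."""
    return r + gamma * P @ V

def split_step(V):
    """Matrix-splitting update: solve exactly under P_hat, correct with the residual P - P_hat.
    Its fixed point is still the true value, and it contracts faster when P_hat is accurate."""
    return np.linalg.solve(np.eye(3) - gamma * P_hat, r + gamma * (P - P_hat) @ V)

V_vi, V_sp = np.zeros(3), np.zeros(3)
for _ in range(20):
    V_vi, V_sp = vi_step(V_vi), split_step(V_sp)
print("value iteration error:", np.max(np.abs(V_vi - V_exact)))
print("splitting error:      ", np.max(np.abs(V_sp - V_exact)))
```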

Reward Poisoning in Reinforcement Learning: Attacks Against Unknown Learners in Unknown Environments

no code implementations · 16 Feb 2021 · Amin Rakhsha, Xuezhou Zhang, Xiaojin Zhu, Adish Singla

We study black-box reward poisoning attacks against reinforcement learning (RL), in which an adversary aims to manipulate the rewards to mislead a sequence of RL agents with unknown algorithms to learn a nefarious policy in an environment unknown to the adversary a priori.

Reinforcement Learning (RL)
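
To make the threat model above concrete, here is a naive sketch of a reward-poisoning adversary that sits between the environment and the learner and nudges every reward toward a fixed target policy. The class name, the bonus/penalty scheme, and the gym-style loop are illustrative assumptions; the paper's black-box attack additionally has to handle unknown learners and an unknown environment while controlling its perturbation cost.

```python
class TargetedRewardPoisoner:
    """Naive reward-poisoning adversary (illustration only, not the paper's attack):
    rewards are nudged so that a fixed target policy looks best to the learner."""

    def __init__(self, target_policy, epsilon=0.5):
        self.target_policy = target_policy   # dict: state -> action the attacker wants taken
        self.epsilon = epsilon               # per-step perturbation magnitude

    def poison(self, state, action, true_reward):
        if action == self.target_policy[state]:
            return true_reward + self.epsilon   # make the target action look better
        return true_reward - self.epsilon       # make every other action look worse

# Hypothetical gym-style training loop with the poisoner in the reward channel:
# poisoner = TargetedRewardPoisoner(target_policy, epsilon=0.5)
# obs, _ = env.reset()
# while not done:
#     action = agent.act(obs)
#     next_obs, reward, done, truncated, _ = env.step(action)
#     agent.observe(obs, action, poisoner.poison(obs, action, reward), next_obs)
#     obs = next_obs
```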

Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks

no code implementations · 21 Nov 2020 · Amin Rakhsha, Goran Radanovic, Rati Devidze, Xiaojin Zhu, Adish Singla

We provide lower/upper bounds on the attack cost, and instantiate our attacks in two settings: (i) an offline setting where the agent is doing planning in the poisoned environment, and (ii) an online setting where the agent is learning a policy with poisoned feedback.

Reinforcement Learning (RL)

Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning

1 code implementation · ICML 2020 · Amin Rakhsha, Goran Radanovic, Rati Devidze, Xiaojin Zhu, Adish Singla

We study a security threat to reinforcement learning where an attacker poisons the learning environment to force the agent into executing a target policy chosen by the attacker.

Reinforcement Learning (RL)
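
The two environment-poisoning entries above frame the attack as finding the smallest change to the environment that makes the attacker's target policy optimal. Below is a small convex-programming sketch of that idea for reward poisoning in a discounted tabular MDP: perturb the rewards as little as possible (here in squared l2 norm) subject to the target policy being optimal with margin eps. The MDP numbers, the margin, and the objective are assumptions for illustration, not the papers' exact formulation.

```python
import numpy as np
import cvxpy as cp

# Toy 2-state, 2-action discounted MDP; every number is an illustrative assumption.
gamma = 0.9
S, A = 2, 2
r = np.array([[1.0, 0.0],                 # true rewards r[s, a]
              [0.0, 1.0]])
P = np.zeros((S, A, S))                   # transition kernel P[s, a, s']
P[0, 0] = [0.9, 0.1]; P[0, 1] = [0.2, 0.8]
P[1, 0] = [0.7, 0.3]; P[1, 1] = [0.1, 0.9]
pi_dagger = np.array([1, 0])              # target policy the attacker wants executed
eps = 0.1                                 # required optimality margin for the target policy

# The value of pi_dagger under the poisoned reward r + delta is affine in delta,
# so "pi_dagger is optimal with margin eps" becomes a set of linear constraints.
P_pi = np.stack([P[s, pi_dagger[s]] for s in range(S)])
M = np.linalg.inv(np.eye(S) - gamma * P_pi)

delta = cp.Variable((S, A))               # reward perturbation chosen by the attacker
delta_pi = cp.hstack([delta[s, pi_dagger[s]] for s in range(S)])
r_pi = np.array([r[s, pi_dagger[s]] for s in range(S)])
V = M @ (r_pi + delta_pi)                 # value of pi_dagger in the poisoned MDP

constraints = []
for s in range(S):
    for a in range(A):
        if a != pi_dagger[s]:
            Q_sa = r[s, a] + delta[s, a] + gamma * (P[s, a] @ V)
            constraints.append(V[s] >= Q_sa + eps)   # target action must win by eps

problem = cp.Problem(cp.Minimize(cp.sum_squares(delta)), constraints)
problem.solve()
print("attack cost (squared l2):", problem.value)
print("poisoned rewards:\n", r + delta.value)
```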
