Search Results for author: Roman Belaire

Found 2 papers, 1 paper with code

Probabilistic Perspectives on Error Minimization in Adversarial Reinforcement Learning

no code implementations • 7 Jun 2024 • Roman Belaire, Arunesh Sinha, Pradeep Varakantham

Deep Reinforcement Learning (DRL) policies are highly susceptible to adversarial noise in observations, which poses significant risks in safety-critical scenarios.

Tasks: counterfactual, reinforcement-learning, +1
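The vulnerability the abstract describes, that a small bounded perturbation of an observation can change a trained policy's action, can be made concrete with a toy example. The sketch below is a generic illustration, not the paper's method: a hypothetical linear policy over a 4-dimensional observation is attacked by searching the vertices of an L-infinity ball, a brute-force stand-in for gradient-based attacks such as FGSM.

```python
import numpy as np

# Generic illustration (toy names, not the paper's method): a linear "policy"
# whose greedy action can be flipped by a small, bounded observation perturbation.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))            # hypothetical policy weights: 4-dim obs -> 2 actions

def greedy_action(obs):
    return int(np.argmax(obs @ W))     # pick the highest-scoring action

obs = rng.normal(size=4)               # clean observation
eps = 1.0                              # adversary's L-infinity budget

# For a linear policy the worst case lies at a vertex of the eps-ball,
# so brute-forcing the 2^4 sign patterns finds it exactly.
adversarial = obs
for signs in np.ndindex(*(2,) * 4):
    delta = eps * (2 * np.array(signs) - 1)
    if greedy_action(obs + delta) != greedy_action(obs):
        adversarial = obs + delta
        break

print("clean action:      ", greedy_action(obs))
print("adversarial action:", greedy_action(adversarial))
```

Deep policies are attacked the same way in principle, with gradient-based search replacing the brute force.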

Regret-Based Defense in Adversarial Reinforcement Learning

1 code implementation • 14 Feb 2023 • Roman Belaire, Pradeep Varakantham, Thanh Nguyen, David Lo

We demonstrate that our approaches yield a significant performance improvement over leading robust Deep RL methods across a wide variety of benchmarks.

Tasks: Reinforcement Learning, +1
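"Regret" in this setting is usually the return a policy loses, relative to its clean behavior, once observations are attacked. As a hedged sketch under assumed toy definitions (the environment, policy, and noise model below are hypothetical, and random noise stands in for a worst-case adversary), such a regret can be estimated from rollouts:

```python
import numpy as np

# Hedged sketch: estimate the regret a policy suffers under bounded observation
# noise, i.e. the gap between its clean return and its return under perturbation.
rng = np.random.default_rng(1)

def step(state, action):
    # Toy 1-D chain: action 1 moves right (reward 1), action 0 moves left (reward 0).
    return state + (1 if action == 1 else -1), float(action == 1)

def policy(obs):
    return int(obs > 0)    # go right when the (possibly noisy) state looks positive

def rollout(perturb=0.0, horizon=20):
    state, ret = 1.0, 0.0
    for _ in range(horizon):
        obs = state + rng.uniform(-perturb, perturb)   # noisy observation
        state, r = step(state, policy(obs))
        ret += r
    return ret

clean = np.mean([rollout(0.0) for _ in range(200)])
attacked = np.mean([rollout(2.0) for _ in range(200)])
print(f"clean return {clean:.2f}, attacked {attacked:.2f}, regret {clean - attacked:.2f}")
```

A regret-based defense would then train the policy to keep this gap small, rather than only maximizing clean return; the exact objective used by the paper is specified there, not here.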
