Safe Reinforcement Learning
24 papers with code • 0 benchmarks • 1 dataset
The last half-decade has seen a steep rise in the number of contributions on safe learning methods for real-world robotic deployments from both the control and reinforcement learning communities.
End-to-End Safe Reinforcement Learning through Barrier Functions for Safety-Critical Continuous Control Tasks
Reinforcement Learning (RL) algorithms have found limited success beyond simulated applications, and one main reason is the absence of safety guarantees during the learning process.
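A common way to provide such guarantees during learning is to filter the RL agent's action through a control barrier function (CBF). The following is a minimal illustrative sketch, not the paper's implementation: it assumes a single-integrator system `dx/dt = u` with the safe set `h(x) = x_max - x >= 0`, for which the CBF condition `dh/dt + alpha*h(x) >= 0` reduces to a simple upper bound on the control.

```python
def cbf_filter(x, u_rl, x_max=1.0, alpha=2.0):
    """Project an RL action onto the safe set for the toy system dx/dt = u.

    Safe set: h(x) = x_max - x >= 0.
    CBF condition: dh/dt + alpha*h(x) >= 0, and since dh/dt = -u this
    reduces to u <= alpha * (x_max - x).
    All names and dynamics here are illustrative assumptions.
    """
    u_max = alpha * (x_max - x)   # largest control that keeps the state safe
    return min(u_rl, u_max)       # pass the RL action through unless it violates the bound
```

For general dynamics the same idea is typically posed as a small quadratic program that finds the minimal correction to the RL action satisfying the CBF inequality; the scalar clamp above is the one-dimensional special case.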
Trial-and-error-based reinforcement learning (RL) has advanced rapidly in recent years, especially with the advent of deep neural networks.
We propose a model-based approach that enables RL agents to explore an environment with unknown system dynamics and constraints effectively, given only a small budget of allowed safety violations.
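The core loop of such a budget-aware scheme can be sketched as follows. This is a simplified illustration under assumed names, not the paper's algorithm: a learned model predicts the violation risk of a proposed exploratory action, and the agent falls back to a known-safe action whenever the predicted risk is too high or the violation budget is exhausted.

```python
def safe_explore_step(state, proposed_action, risk_model, budget, fallback_action,
                      threshold=0.1):
    """One step of budget-aware safe exploration (illustrative sketch).

    risk_model(state, action) -> predicted probability of a constraint
    violation; `budget` counts how many violations are still allowed.
    Both the interface and the threshold rule are assumptions for this sketch.
    """
    risk = risk_model(state, proposed_action)
    if budget <= 0 or risk > threshold:
        return fallback_action    # too risky, or no budget left: act conservatively
    return proposed_action        # model deems the exploratory action safe enough
```

In a full method the risk model itself is learned online, so early in training the threshold effectively rations the small violation budget across the episodes where exploration is most informative.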
Safe reinforcement learning has been a promising approach for optimizing the policy of an agent that operates in safety-critical applications.
We formalize human intervention for RL and show how to reduce the human labor required by training a supervised learner to imitate the human's intervention decisions.
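Imitating the human's intervention decisions amounts to fitting a binary classifier on logged (state feature, intervened?) pairs and then using it to veto actions automatically. A minimal sketch, assuming a single hand-picked risk feature and plain logistic regression (the feature, data format, and threshold are illustrative assumptions, not the paper's setup):

```python
import math

def train_blocker(data, lr=0.5, epochs=200):
    """Fit p(intervene | feature) with logistic regression via SGD.

    data: list of (feature, label) pairs, label 1 = the human intervened.
    Returns learned weight and bias (w, b).
    """
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted intervention prob
            g = p - y                                  # gradient of the log-loss
            w -= lr * g * x
            b -= lr * g
    return w, b

def blocks(w, b, x, threshold=0.5):
    """Imitated intervention: veto the action if p(intervene) exceeds threshold."""
    return 1.0 / (1.0 + math.exp(-(w * x + b))) > threshold
```

Once the learned blocker agrees with the human often enough, it replaces the human in the loop, which is the labor reduction the snippet above refers to.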
Navigating urban environments represents a complex task for automated vehicles.
Reinforcement Learning for Temporal Logic Control Synthesis with Probabilistic Satisfaction Guarantees
Reinforcement Learning (RL) has emerged as an efficient method of choice for solving complex sequential decision making problems in automatic control, computer science, economics, and biology.
When the MDP's state space is finite, this satisfaction probability (the certificate) is computed in parallel with policy learning, so the RL algorithm produces a policy that is certified with respect to the property.
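For the simplest temporal-logic property, "eventually reach the goal while avoiding a trap," the certificate for a fixed policy on a finite MDP can be computed by fixed-point iteration over the policy-induced Markov chain. A minimal sketch under an assumed transition encoding (not the paper's algorithm):

```python
def satisfaction_probability(transitions, policy, goal, trap, n_states, iters=1000):
    """Certify a fixed policy on a finite MDP: the probability of eventually
    reaching `goal` while avoiding the absorbing `trap` state.

    transitions[s][a] = list of (next_state, prob) pairs; `policy[s]` picks the
    action in state s. This encoding is a hypothetical choice for the sketch.
    """
    p = [0.0] * n_states
    p[goal] = 1.0                 # goal satisfies the property with certainty
    for _ in range(iters):
        new_p = p[:]
        for s in range(n_states):
            if s in (goal, trap): # absorbing states keep their values
                continue
            a = policy[s]
            # Bellman-style backup of the reachability probability
            new_p[s] = sum(pr * p[s2] for s2, pr in transitions[s][a])
        p = new_p
    return p
```

Richer temporal-logic formulas are typically handled by composing the MDP with an automaton for the formula and running the same kind of reachability computation on the product, which is why the certificate can be maintained alongside policy learning.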