Safe Reinforcement Learning
89 papers with code • 0 benchmarks • 1 dataset
Benchmarks
These leaderboards are used to track progress in Safe Reinforcement Learning
Libraries
Use these libraries to find Safe Reinforcement Learning models and implementations

Most implemented papers
Constrained Policy Optimization
For many applications of reinforcement learning it can be more convenient to specify both a reward function and constraints, rather than trying to design behavior through the reward function.
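The constrained setting CPO targets is usually written as a constrained MDP: maximize the expected reward return subject to the expected cost return staying under a budget. Below is a minimal sketch of the simplest way to handle such a constraint, a Lagrangian relaxation with dual ascent on the multiplier; it is a generic illustration, not CPO's trust-region update, and every name (cost_limit, lagrange_lr, the advantage tensors) is a placeholder.

# Illustrative Lagrangian relaxation of a constrained policy objective.
# NOTE: generic sketch, not CPO's update rule; all names are placeholders.
import torch

cost_limit = 25.0     # budget d on the expected cost return
lagrange_lr = 0.01    # dual ascent step size
lam = 0.0             # Lagrange multiplier, kept non-negative

def lagrangian_policy_loss(log_probs, reward_advantages, cost_advantages):
    """Policy-gradient loss for maximizing E[A_r - lam * A_c]:
    actions with high cost advantage are penalized in proportion to lam."""
    weights = (reward_advantages - lam * cost_advantages).detach()
    return -(log_probs * weights).mean()

def update_multiplier(mean_episode_cost):
    """Dual ascent: grow lam while measured cost exceeds the budget, shrink it otherwise."""
    global lam
    lam = max(0.0, lam + lagrange_lr * (mean_episode_cost - cost_limit))

CPO itself enforces the constraint through a trust-region step rather than a penalty; the sketch above is only the simplest baseline for the same constrained objective.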
Safe Learning in Robotics: From Learning-Based Control to Safe Reinforcement Learning
The last half-decade has seen a steep rise in the number of contributions on safe learning methods for real-world robotic deployments from both the control and reinforcement learning communities.
Feasible Actor-Critic: Constrained Reinforcement Learning for Ensuring Statewise Safety
The safety constraints commonly used by existing safe reinforcement learning (RL) methods are defined only in expectation over initial states, so individual states may still be unsafe, which is unsatisfactory for real-world safety-critical tasks.
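The distinction the excerpt draws can be made precise. With V_r and V_c denoting reward and cost value functions and d a budget (notation assumed here), an expectation-based constraint bounds the cost only on average over initial states, whereas a statewise constraint must hold at every state the policy visits:

\[
\max_{\pi}\ \mathbb{E}_{s_0\sim\rho_0}\!\big[V_r^{\pi}(s_0)\big]
\quad\text{s.t.}\quad
\mathbb{E}_{s_0\sim\rho_0}\!\big[V_c^{\pi}(s_0)\big]\le d
\quad\text{(expectation-based)}
\qquad\text{vs.}\qquad
V_c^{\pi}(s)\le d\ \ \text{for every visited state } s
\quad\text{(statewise)}.
\]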
Constrained Update Projection Approach to Safe Policy Optimization
Compared to previous safe RL methods, CUP generalizes the surrogate functions to the generalized advantage estimator (GAE), leading to strong empirical performance.
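Since the excerpt highlights the generalized advantage estimator, here is a minimal reference implementation of GAE itself (the standard estimator, not CUP's surrogate objective); the value array is assumed to include a bootstrap value for the final state.

import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized advantage estimation:
    A_t = sum_l (gamma*lam)^l * delta_{t+l}, with TD residuals
    delta_t = r_t + gamma * V(s_{t+1}) - V(s_t).
    `values` has length len(rewards) + 1 (bootstrap value appended)."""
    rewards = np.asarray(rewards, dtype=np.float64)
    values = np.asarray(values, dtype=np.float64)
    deltas = rewards + gamma * values[1:] - values[:-1]
    advantages = np.zeros_like(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = deltas[t] + gamma * lam * running
        advantages[t] = running
    return advantages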
Datasets and Benchmarks for Offline Safe Reinforcement Learning
This paper presents a comprehensive benchmarking suite tailored to offline safe reinforcement learning (RL) challenges, aiming to foster progress in the development and evaluation of safe learning algorithms in both the training and deployment phases.
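A dataset for offline safe RL typically stores a per-step cost alongside the usual transition fields. The sketch below shows one plausible layout and a batch sampler; the file path and field names are assumptions for illustration only, not the benchmark's actual format or API.

import numpy as np

# Hypothetical on-disk layout: parallel arrays of transitions plus a cost signal.
data = np.load("safe_rl_dataset.npz")            # placeholder path
obs, actions = data["observations"], data["actions"]
rewards, costs = data["rewards"], data["costs"]  # costs encode constraint violations
next_obs, dones = data["next_observations"], data["terminals"]

def sample_batch(batch_size=256, rng=np.random.default_rng(0)):
    """Uniformly sample a training batch for an offline (safe) RL algorithm."""
    idx = rng.integers(0, len(obs), size=batch_size)
    fields = {"obs": obs, "act": actions, "rew": rewards,
              "cost": costs, "next_obs": next_obs, "done": dones}
    return {k: v[idx] for k, v in fields.items()}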
Balance Reward and Safety Optimization for Safe Reinforcement Learning: A Perspective of Gradient Manipulation
Ensuring the safety of Reinforcement Learning (RL) is crucial for its deployment in real-world applications.
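The "gradient manipulation" named in the title generally refers to reconciling the reward gradient with the safety (cost) gradient when the two conflict. The projection below is one standard rule of that kind, shown only to illustrate the idea; it is not claimed to be this paper's exact update, and both gradients are assumed to be flattened parameter gradients.

import torch

def resolve_conflict(reward_grad, cost_grad):
    """If following the reward gradient would also increase the expected cost
    (positive inner product with cost_grad), strip out that conflicting component."""
    if torch.dot(reward_grad, cost_grad) > 0:
        coef = torch.dot(reward_grad, cost_grad) / (cost_grad.norm() ** 2 + 1e-8)
        return reward_grad - coef * cost_grad   # project onto the plane orthogonal to cost_grad
    return reward_grad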
Safe Reinforcement Learning with Scene Decomposition for Navigating Complex Urban Environments
Navigating urban environments represents a complex task for automated vehicles.
Recovery RL: Safe Reinforcement Learning with Learned Recovery Zones
Safety remains a central obstacle preventing widespread use of RL in the real world: learning new tasks in uncertain environments requires extensive exploration, but safety requires limiting exploration.
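The learned recovery zones in Recovery RL amount, roughly, to a learned risk critic that estimates the chance of a future constraint violation: while the estimate stays low the task policy acts, and otherwise control is handed to a recovery policy. A schematic sketch of that action-selection rule follows (function and argument names are placeholders, and the threshold value is arbitrary).

def select_action(obs, task_policy, recovery_policy, risk_critic, eps_risk=0.3):
    """Schematic Recovery-RL-style switch: keep the task policy's action while the
    estimated risk is low, otherwise fall back to the recovery policy."""
    proposed = task_policy(obs)
    if risk_critic(obs, proposed) > eps_risk:   # predicted violation risk too high
        return recovery_policy(obs)
    return proposed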
MetaDrive: Composing Diverse Driving Scenarios for Generalizable Reinforcement Learning
Based on MetaDrive, we construct a variety of RL tasks and baselines in both single-agent and multi-agent settings, including benchmarking generalizability across unseen scenes, safe exploration, and learning multi-agent traffic.
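One of the protocols mentioned, benchmarking generalizability across unseen scenes, amounts to training on one set of procedurally generated scenarios and evaluating on a held-out set. The loop below sketches that split using a hypothetical make_env(seed) helper and a generic Gym-style interface; it deliberately avoids MetaDrive's actual configuration keys, which are not reproduced here.

# Hypothetical scenario split: seeds 0-99 for training, 100-119 held out for evaluation.
TRAIN_SEEDS = range(0, 100)
TEST_SEEDS = range(100, 120)   # unseen scenes, never used during training

def evaluate(policy, make_env, seeds, episodes_per_seed=1):
    """Average episode return of `policy` over the scenarios given by `seeds`."""
    returns = []
    for seed in seeds:
        env = make_env(seed)            # placeholder: builds one scenario per seed
        for _ in range(episodes_per_seed):
            obs, done, ep_ret = env.reset(), False, 0.0
            while not done:
                obs, reward, done, info = env.step(policy(obs))
                ep_ret += reward
            returns.append(ep_ret)
    return sum(returns) / len(returns)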
Constrained Variational Policy Optimization for Safe Reinforcement Learning
Safe reinforcement learning (RL) aims to learn policies that satisfy certain constraints before deploying them to safety-critical applications.
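The excerpt's definition of safe RL, satisfying constraints before deployment, can be checked directly by estimating a policy's expected cost return from evaluation rollouts and comparing it to the budget. The gate below is a minimal sketch of that check under a Gym-style interface with a per-step cost in `info`; it is a generic evaluation routine, not CVPO's optimization procedure.

def constraint_satisfied(policy, env, cost_limit, episodes=20, gamma=1.0):
    """Monte-Carlo estimate of the expected (discounted) cost return; the policy
    passes the deployment gate only if the estimate stays within the budget."""
    total = 0.0
    for _ in range(episodes):
        obs, done, discount = env.reset(), False, 1.0
        while not done:
            obs, reward, done, info = env.step(policy(obs))
            total += discount * info.get("cost", 0.0)   # per-step constraint cost
            discount *= gamma
    return total / episodes <= cost_limit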