Safe Reinforcement Learning

74 papers with code • 0 benchmarks • 1 dataset



Most implemented papers

Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning

lasgroup/lbsgd-rl 21 Jul 2022

We introduce a general approach for seeking a stationary point in high dimensional non-linear stochastic optimization problems in which maintaining safety during learning is crucial.
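The core idea of a log-barrier method can be sketched with a toy constrained problem: a penalty term `-eta * log(-g(x))` blows up as the iterate approaches the constraint boundary `g(x) = 0`, so plain gradient descent on the augmented objective stays feasible throughout. This is a minimal illustration of the general technique, not the paper's algorithm; all names (`f_grad`, `g`, `eta`) are assumptions.

```python
# Minimal log-barrier sketch: minimize f(x) subject to g(x) <= 0
# by descending the augmented objective f(x) - eta * log(-g(x)).

def log_barrier_step(x, f_grad, g, g_grad, eta=0.1, lr=0.01):
    """One gradient step on the barrier-augmented objective."""
    # d/dx [-eta * log(-g(x))] = eta * g'(x) / (-g(x))
    barrier_grad = eta * g_grad(x) / (-g(x))
    return x - lr * (f_grad(x) + barrier_grad)

# Toy problem: minimize (x - 2)^2 subject to x - 1 <= 0.
f_grad = lambda x: 2 * (x - 2)
g = lambda x: x - 1.0
g_grad = lambda x: 1.0

x = 0.0  # start strictly inside the feasible region
for _ in range(2000):
    x = log_barrier_step(x, f_grad, g, g_grad)
print(x)  # settles near the constrained optimum, strictly inside g(x) < 0
```

The barrier keeps every iterate feasible, which is the property that makes such methods attractive when safety must hold *during* learning, not just at convergence.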

Self-Improving Safety Performance of Reinforcement Learning Based Driving with Black-Box Verification Algorithms

data-and-decision-lab/self-improving-RL 29 Oct 2022

In this work, we propose a self-improving artificial intelligence system to enhance the safety performance of reinforcement learning (RL)-based autonomous driving (AD) agents using black-box verification methods.

NLBAC: A Neural Ordinary Differential Equations-based Framework for Stable and Safe Reinforcement Learning

liqunzhao/neural-ordinary-differential-equations-based-lyapunov-barrier-actor-critic-nlbac 23 Jan 2024

Reinforcement learning (RL) excels in applications such as video games and robotics, but ensuring safety and stability remains challenging when RL is used to control real-world systems, where the low sample efficiency of model-free algorithms can be prohibitive.

Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control

SimonRennotte/Data-Efficient-Reinforcement-Learning-with-Probabilistic-Model-Predictive-Control 20 Jun 2017

Trial-and-error based reinforcement learning (RL) has seen rapid advancements in recent times, especially with the advent of deep neural networks.

Trial without Error: Towards Safe Reinforcement Learning via Human Intervention

gsastry/human-rl 17 Jul 2017

We formalize human intervention for RL and show how to reduce the human labor required by training a supervised learner to imitate the human's intervention decisions.
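The two-phase idea — a human blocks catastrophic actions first, then a supervised "blocker" trained on the logged decisions takes over — can be sketched as follows. The 1-D environment, the human's rule, and the lookup-table blocker are all illustrative assumptions, not the paper's implementation.

```python
# Sketch of human-intervention RL: imitate the human's blocking decisions
# so that human labor does not scale with training time.

def human_blocks(state, action):
    # Stand-in for a human overseer: block moves that fall below position 0.
    return state + action < 0

def train_blocker(labels):
    """Fit a trivial lookup-table blocker from logged human decisions."""
    table = dict(labels)
    return lambda s, a: table.get((s, a), False)

# Phase 1: log the human's intervention decisions.
logged = {}
for s in range(3):
    for a in (-1, 1):
        logged[(s, a)] = human_blocks(s, a)

# Phase 2: the learned blocker imitates the human and replaces them.
blocker = train_blocker(logged.items())
print(blocker(0, -1))  # True: the imitator blocks the unsafe move
print(blocker(2, 1))   # False: safe moves pass through
```

In practice the blocker would be a classifier generalizing beyond logged states, but the division of labor is the same: the human supervises early, the imitator supervises thereafter.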

Safe Reinforcement Learning via Shielding

DanielLSM/safe-rl-tutorial 29 Aug 2017

The shield can be deployed in two ways. In the first, it acts each time the learning agent is about to make a decision and provides a list of safe actions.
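That pre-emptive mode can be sketched in a few lines: before the agent picks an action, the shield filters the action set down to those it can certify as safe, and the agent chooses only among the survivors. The toy safety predicate and the fallback behavior are assumptions for illustration.

```python
import random

def shield(state, actions, is_safe):
    """Return the subset of actions the shield certifies as safe in `state`."""
    safe = [a for a in actions if is_safe(state, a)]
    # Fall back to all actions if none can be certified (a design choice).
    return safe or actions

# Toy 1-D example: moving past position 4 is unsafe.
is_safe = lambda pos, a: pos + a <= 4
actions = [-1, 0, 1]

pos = 4
allowed = shield(pos, actions, is_safe)
print(allowed)  # [-1, 0]: the shield removes the unsafe move
action = random.choice(allowed)  # the agent chooses only among safe actions
```

In the paper's setting the safety predicate comes from a formally verified reactive system rather than a hand-written lambda, but the interaction pattern is the same.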

Logically-Constrained Reinforcement Learning

grockious/lcrl 24 Jan 2018

With this reward function, the policy synthesis procedure is "constrained" by the given specification.
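The shape of such a constrained reward can be illustrated with a tiny automaton that tracks progress through a specification and pays reward only on transitions that complete it. The specification here ("visit A, then B") and the dictionary encoding are assumptions for the sketch, not the paper's construction.

```python
# Reward derived from a logical specification: an automaton tracks progress,
# and reward is paid only when the trace satisfies the spec.

SPEC = {  # (automaton_state, observation) -> next automaton state
    (0, "A"): 1,
    (1, "B"): 2,  # state 2 = specification satisfied
}

def constrained_reward(q, obs):
    """Advance the automaton; reward 1.0 only on reaching the accepting state."""
    q_next = SPEC.get((q, obs), q)
    return q_next, (1.0 if q_next == 2 else 0.0)

q, total = 0, 0.0
for obs in ["B", "A", "C", "B"]:  # seeing B before A earns nothing
    q, r = constrained_reward(q, obs)
    total += r
print(total)  # 1.0: reward arrives only once the trace satisfies "A then B"
```

Because reward flows only through spec-satisfying traces, any policy maximizing it is pushed toward behavior consistent with the specification.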

A Lyapunov-based Approach to Safe Reinforcement Learning

jemaw/gym-safety NeurIPS 2018

In many real-world reinforcement learning (RL) problems, besides optimizing the main objective function, an agent must concurrently avoid violating a number of constraints.

Reward Constrained Policy Optimization

HaozheJasper/CBRL_KDD22 ICLR 2019

Solving tasks in Reinforcement Learning is no easy feat.
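The Lagrangian idea behind reward-constrained optimization can be sketched briefly: the constraint "expected cost ≤ d" is folded into the reward as `r - lam * c`, and the multiplier `lam` is raised by dual ascent while the constraint is violated. The learning rate, budget, and cost sequence below are illustrative assumptions, not values from the paper.

```python
# Lagrangian sketch for constrained RL: penalize cost in the reward and
# adapt the penalty weight until the cost budget d is respected.

def penalized_reward(r, c, lam):
    """Reward the policy actually optimizes."""
    return r - lam * c

def update_multiplier(lam, avg_cost, d, lr=0.1):
    """Dual ascent: grow lam while average cost exceeds the budget d."""
    return max(0.0, lam + lr * (avg_cost - d))

lam, d = 0.0, 0.2
# Assumed trajectory of average costs shrinking as the penalty bites.
for avg_cost in [0.9, 0.7, 0.5, 0.3, 0.2]:
    lam = update_multiplier(lam, avg_cost, d)
print(round(lam, 2))  # 0.16: lam stops growing once cost meets the budget
```

The appeal of this scheme is that the policy optimizer itself stays unconstrained; all constraint handling lives in the scalar multiplier update.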

Better Safe than Sorry: Evidence Accumulation Allows for Safe Reinforcement Learning

susumuota/gym-modeestimation 24 Sep 2018

The agent makes no decision by default, and the burden of proof to make a decision falls on the policy to accrue evidence strongly in favor of a single decision.
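The accumulate-until-threshold idea can be sketched as follows: the agent defaults to no decision and commits to an action only once the evidence in its favor clears a bar. The threshold and evidence stream are assumptions for illustration.

```python
# Evidence accumulation: default to "no decision" until one action's
# accumulated evidence crosses the threshold.

def accumulate(evidence_stream, threshold=3.0):
    """Return the chosen action, or None if no action gathers enough evidence."""
    totals = {}
    for action, strength in evidence_stream:
        totals[action] = totals.get(action, 0.0) + strength
        if totals[action] >= threshold:
            return action  # burden of proof met: commit
    return None  # default: make no decision

stream = [("left", 1.0), ("right", 0.5), ("left", 1.0), ("left", 1.2)]
print(accumulate(stream))  # "left": its evidence reaches 3.2 >= 3.0
print(accumulate([("right", 0.5)]))  # None: evidence never clears the bar
```

Placing the burden of proof on acting, rather than on abstaining, is what makes inaction the safe default.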