Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. We address the problem of deploying a reinforcement learning (RL) agent on a physical system, such as a datacenter cooling unit or a robot, where critical constraints must never be violated. However, standard RL methods typically provide no safety guarantees, which prevents their use in such safety-critical, real-world applications. We define safety in terms of an a priori unknown safety constraint that depends on states and actions. We present a suite of reinforcement learning environments illustrating various safety properties of intelligent agents.
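For illustration, the notion of a state- and action-dependent safety constraint can be written as a constrained RL objective. This is a hedged sketch, not the paper's formulation: the symbols $c$, $r$, $\pi$, $\gamma$, $s_t$, and $a_t$ are assumptions introduced here, with $c$ standing for the a priori unknown constraint function.

```latex
% Illustrative constrained RL objective (all symbols are assumptions):
% maximize expected discounted return subject to a per-step safety
% constraint c(s_t, a_t) <= 0 that must hold at every time step.
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right]
\quad \text{s.t.} \quad c(s_t, a_t) \le 0 \;\; \forall t
```

The key difficulty motivating the text above is that $c$ is not known in advance, so the agent must avoid violating a constraint it can only learn about through interaction.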