Safe Exploration
35 papers with code • 0 benchmarks • 0 datasets
Safe Exploration is an approach to collecting ground-truth data by safely interacting with the environment.
Source: Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems
Benchmarks
These leaderboards are used to track progress in Safe Exploration
Libraries
Use these libraries to find Safe Exploration models and implementations.
Latest papers
A comparison of RL-based and PID controllers for 6-DOF swimming robots: hybrid underwater object tracking
We use established methods for vision-based tracking and introduce a centralized DQN controller.
State-Wise Safe Reinforcement Learning With Pixel Observations
In the context of safe exploration, reinforcement learning (RL) has long grappled with balancing reward maximization against safety violations, particularly in complex environments with contact-rich or non-smooth dynamics and with high-dimensional pixel observations.
Safe and Sample-efficient Reinforcement Learning for Clustered Dynamic Environments
This study proposes a safe and sample-efficient reinforcement learning (RL) framework to address two major challenges in developing applicable RL algorithms: satisfying safety constraints and efficiently learning with limited samples.
Information-Theoretic Safe Exploration with Gaussian Processes
We consider a sequential decision making task where we are not allowed to evaluate parameters that violate an a priori unknown (safety) constraint.
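This setting — only evaluating parameters that a model certifies as safe — can be sketched with a simple Gaussian-process loop. The RBF kernel, the pessimistic lower-confidence-bound rule, and all function names below are illustrative assumptions (in the style of SafeOpt-like methods), not the paper's actual algorithm.

```python
# Minimal sketch of GP-based safe exploration, assuming near-noiseless
# evaluations and an RBF kernel; all names here are illustrative.
import numpy as np

def rbf(a, b, ls=0.5):
    # squared-exponential kernel on 1-D inputs
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-4):
    # standard GP regression posterior mean and standard deviation
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_query)
    mu = Ks.T @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum("ij,ij->j", Ks, np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.maximum(var, 0.0))

def safe_explore(f, x0, candidates, threshold, beta=2.0, steps=10):
    """Evaluate only candidates whose posterior lower confidence bound
    (mu - beta * sd) stays above the safety threshold, and among those
    pick the most uncertain one to expand the certified-safe region."""
    xs, ys = [x0], [f(x0)]
    for _ in range(steps):
        mu, sd = gp_posterior(np.array(xs), np.array(ys), candidates)
        safe = (mu - beta * sd) >= threshold      # pessimistic safety check
        if not safe.any():
            break                                 # nothing certifiably safe left
        idx = np.argmax(np.where(safe, sd, -np.inf))
        xs.append(float(candidates[idx]))
        ys.append(f(candidates[idx]))
    return xs, ys
```

With a large `beta` the certified region expands slowly, but evaluated points stay safe with high probability; the guarantee rests on the GP model being well-specified for the unknown constraint.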
Benefits of Monotonicity in Safe Exploration with Gaussian Processes
We consider the problem of sequentially maximising an unknown function over a set of actions while ensuring that every sampled point has a function value below a given safety threshold.
Atlas: Automate Online Service Configuration in Network Slicing
First, we design a learning-based simulator to reduce the sim-to-real discrepancy, which is accomplished by a new parameter searching method based on Bayesian optimization.
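The parameter search via Bayesian optimization mentioned above can be illustrated with a minimal surrogate-based loop: fit a GP to observed discrepancies and pick the next parameters by expected improvement. The RBF surrogate, the toy discrepancy function, and all names below are generic assumptions, not Atlas's actual implementation.

```python
# Minimal sketch of Bayesian-optimization parameter search (minimization
# via expected improvement); kernel, names, and objective are illustrative.
import numpy as np
from math import erf, sqrt, pi

norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))

def norm_pdf(z):
    return np.exp(-0.5 * z * z) / sqrt(2.0 * pi)

def gp_fit_predict(x_train, y_train, x_query, ls=0.3, noise=1e-4):
    # GP surrogate: posterior mean and std dev under an RBF kernel
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = k(x_train, x_query)
    mu = Ks.T @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum("ij,ij->j", Ks, np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.maximum(var, 1e-12))

def bayes_opt(f, candidates, n_init=3, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    xs = list(rng.choice(candidates, size=n_init, replace=False))
    ys = [f(x) for x in xs]
    for _ in range(n_iter):
        mu, sd = gp_fit_predict(np.array(xs), np.array(ys), candidates)
        best = min(ys)                        # we minimize the discrepancy
        z = (best - mu) / sd
        ei = (best - mu) * norm_cdf(z) + sd * norm_pdf(z)  # expected improvement
        x_next = candidates[np.argmax(ei)]
        xs.append(x_next)
        ys.append(f(x_next))
    return xs[int(np.argmin(ys))], min(ys)

# e.g. search for the simulator parameter minimizing a toy discrepancy:
# best_x, best_y = bayes_opt(lambda x: (x - 0.7) ** 2, np.linspace(0, 1, 101))
```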
Model-based Safe Deep Reinforcement Learning via a Constrained Proximal Policy Optimization Algorithm
We compare our approach with relevant model-free and model-based constrained RL approaches on the challenging OpenAI Safety Gym benchmark.
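One common way to handle a constraint inside a policy-optimization objective is a Lagrangian relaxation, alternating a primal step on the parameters with a dual-ascent step on the multiplier. The toy quadratic reward and cost below are illustrative stand-ins, not the paper's algorithm or its model-based components.

```python
# Toy sketch of Lagrangian-relaxed constrained optimization:
# maximize reward(theta) subject to cost(theta) <= limit.
def constrained_optimize(reward, cost, limit, theta=0.0, lam=0.0,
                         lr=0.05, lr_lam=0.05, steps=2000, eps=1e-4):
    """Gradient ascent on the Lagrangian L = R(theta) - lam * (C(theta) - limit),
    with dual ascent on lam to enforce the constraint."""
    for _ in range(steps):
        # central finite-difference gradients keep the sketch dependency-free
        g_r = (reward(theta + eps) - reward(theta - eps)) / (2 * eps)
        g_c = (cost(theta + eps) - cost(theta - eps)) / (2 * eps)
        theta += lr * (g_r - lam * g_c)                        # primal step
        lam = max(0.0, lam + lr_lam * (cost(theta) - limit))   # dual step
    return theta, lam

# the unconstrained reward optimum is theta = 2, but the cost theta^2
# must stay below 1, so the constrained optimum sits on the boundary theta = 1
theta, lam = constrained_optimize(lambda t: -(t - 2.0) ** 2,
                                  lambda t: t ** 2, limit=1.0)
```

In deep constrained RL the primal step is replaced by a clipped policy-gradient update and the cost is estimated from rollouts, but the primal-dual structure is the same.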
Near-Optimal Multi-Agent Learning for Safe Coverage Control
In this paper, we aim to efficiently learn the density to approximately solve the coverage problem while preserving the agents' safety.
Safe Exploration Method for Reinforcement Learning under Existence of Disturbance
We define safety during learning as satisfaction of constraint conditions defined explicitly in terms of the state, and propose a safe exploration method that uses partial prior knowledge of the controlled object and the disturbance.
Toward Safe and Accelerated Deep Reinforcement Learning for Next-Generation Wireless Networks
Nevertheless, several challenges hinder the practical adoption of DRL in commercial networks.