Search Results for author: Sebastian Blaes

Found 9 papers, 3 papers with code

Mind the Uncertainty: Risk-Aware and Actively Exploring Model-Based Reinforcement Learning

no code implementations 11 Sep 2023 Marin Vlastelica, Sebastian Blaes, Cristina Pinneri, Georg Martius

We introduce a simple but effective method for managing risk in model-based reinforcement learning with trajectory sampling. The method combines probabilistic safety constraints with a balance of optimism in the face of epistemic uncertainty and pessimism in the face of aleatoric uncertainty, both estimated from an ensemble of stochastic neural networks. Various experiments indicate that separating the two kinds of uncertainty is essential to performing well with data-driven MPC approaches in uncertain and safety-critical control environments.
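The abstract's key idea, decomposing an ensemble's uncertainty into an epistemic part (member disagreement) and an aleatoric part (predicted noise) and treating the two asymmetrically, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, the square-root bonus/penalty form, and the `beta` weights are assumptions.

```python
import numpy as np

def decompose_uncertainty(means, variances):
    """Split ensemble uncertainty into epistemic and aleatoric parts.

    means, variances: arrays of shape (n_models, n_states), the per-member
    predictive mean and variance of an ensemble of stochastic networks.
    """
    # Aleatoric: irreducible noise the members themselves predict,
    # averaged over the ensemble.
    aleatoric = variances.mean(axis=0)
    # Epistemic: disagreement between ensemble members' means,
    # which shrinks as the model sees more data.
    epistemic = means.var(axis=0)
    return epistemic, aleatoric

def risk_aware_value(reward_mean, epistemic, aleatoric,
                     beta_opt=1.0, beta_pess=1.0):
    # Optimism in the face of epistemic uncertainty (exploration bonus),
    # pessimism in the face of aleatoric uncertainty (risk penalty).
    return (reward_mean
            + beta_opt * np.sqrt(epistemic)
            - beta_pess * np.sqrt(aleatoric))
```

In a trajectory-sampling MPC loop, a score like `risk_aware_value` would rank candidate action sequences, so the planner seeks out states the model is merely unsure about while avoiding states that are genuinely noisy.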

Tasks: Model-based Reinforcement Learning, Reinforcement Learning

Benchmarking Offline Reinforcement Learning on Real-Robot Hardware

2 code implementations 28 Jul 2023 Nico Gürtler, Sebastian Blaes, Pavel Kolev, Felix Widmaier, Manuel Wüthrich, Stefan Bauer, Bernhard Schölkopf, Georg Martius

To coordinate the efforts of the research community toward tackling this problem, we propose a benchmark that includes: i) a large collection of data for offline learning, gathered on two dexterous manipulation tasks with capable RL agents trained in simulation; and ii) the option to execute learned policies on a real-world robotic system, together with a simulation for efficient debugging.

Tasks: Benchmarking, Reinforcement Learning

Curious Exploration via Structured World Models Yields Zero-Shot Object Manipulation

no code implementations 22 Jun 2022 Cansu Sancaktar, Sebastian Blaes, Georg Martius

It has been a long-standing dream to design artificial agents that explore their environment efficiently via intrinsic motivation, similar to how children perform curious free play.

Tasks: Efficient Exploration, Object, +2 more

Control What You Can: Intrinsically Motivated Task-Planning Agent

1 code implementation NeurIPS 2019 Sebastian Blaes, Marin Vlastelica Pogančić, Jia-Jie Zhu, Georg Martius

We present a novel intrinsically motivated agent that learns to control its environment as fast as possible by optimizing for learning progress.
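The "optimizing learning progress" idea mentioned above can be illustrated with a small task-selection sketch: the agent tracks its recent competence on each task and prefers tasks where competence is changing fastest. This is a hypothetical illustration, not the paper's algorithm; the class, its parameters (`window`, `eps`), and the absolute-difference progress measure are assumptions.

```python
import random

class LearningProgressSelector:
    """Pick the task whose success rate is currently changing the most."""

    def __init__(self, tasks, window=10, eps=0.1):
        self.tasks = list(tasks)
        self.window = window                  # recent-history length per task
        self.eps = eps                        # probability of a random task
        self.history = {t: [] for t in self.tasks}

    def record(self, task, success):
        # Append a binary (or graded) success signal for the task.
        h = self.history[task]
        h.append(float(success))
        if len(h) > 2 * self.window:
            h.pop(0)

    def learning_progress(self, task):
        h = self.history[task]
        if len(h) < 2 * self.window:
            return float('inf')               # unexplored tasks are maximally interesting
        recent = sum(h[-self.window:]) / self.window
        older = sum(h[:self.window]) / self.window
        return abs(recent - older)            # absolute change in competence

    def select(self):
        if random.random() < self.eps:
            return random.choice(self.tasks)  # occasional random exploration
        return max(self.tasks, key=self.learning_progress)
```

With this kind of schedule, mastered tasks (flat, high success) and currently impossible tasks (flat, low success) both score near zero, so practice concentrates on tasks at the frontier of the agent's abilities.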
