no code implementations • 6 Jun 2024 • Omar G. Younis, Luca Corinzia, Ioannis N. Athanasiadis, Andreas Krause, Joachim M. Buhmann, Matteo Turchetta
Crop breeding is crucial for improving agricultural productivity while potentially decreasing land use, greenhouse gas emissions, and water consumption.
1 code implementation • 9 Feb 2024 • Manish Prajapat, Johannes Köhler, Matteo Turchetta, Andreas Krause, Melanie N. Zeilinger
Based on this framework, we propose an efficient algorithm, SageMPC (SAfe Guaranteed Exploration using Model Predictive Control).
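The paper's construction relies on learned models and formal uncertainty bounds; the snippet below is only a minimal, hypothetical sketch of the underlying receding-horizon idea, namely planning trajectories that stay inside a conservatively known safe region and can return to a safe anchor state, then executing just the first action. The dynamics, bounds, and random-shooting planner are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of safe exploration with a receding-horizon planner:
# plan a short trajectory that (a) stays inside a conservatively known safe
# region and (b) ends back near a known-safe anchor state (returnability),
# while maximizing novelty. This is NOT SageMPC itself, just the basic idea.

rng = np.random.default_rng(0)
SAFE_LO, SAFE_HI = -1.0, 1.0   # assumed known-safe interval for the state
H = 5                          # planning horizon
x, anchor = 0.0, 0.0           # current state and safe anchor
visited = [x]

def rollout(x0, actions):
    """Simple single-integrator dynamics: x_{t+1} = x_t + u_t."""
    xs = [x0]
    for u in actions:
        xs.append(xs[-1] + u)
    return np.array(xs)

for step in range(20):
    best_plan, best_score = None, -np.inf
    for _ in range(200):  # random shooting in place of a real MPC solver
        plan = rng.uniform(-0.2, 0.2, size=H)
        xs = rollout(x, plan)
        safe = np.all((xs >= SAFE_LO) & (xs <= SAFE_HI))
        returnable = abs(xs[-1] - anchor) < 0.05  # plan ends near the anchor
        if not (safe and returnable):
            continue  # discard plans without a safety certificate
        novelty = min(abs(xs[1] - v) for v in visited)  # prefer new states
        if novelty > best_score:
            best_plan, best_score = plan, novelty
    if best_plan is None:
        break  # no certified plan found; stay put
    x = x + best_plan[0]  # receding horizon: apply only the first action
    visited.append(x)

print(f"visited {len(visited)} states, all within [{SAFE_LO}, {SAFE_HI}]")
```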
1 code implementation • 12 Oct 2022 • Manish Prajapat, Matteo Turchetta, Melanie N. Zeilinger, Andreas Krause
In this paper, we aim to efficiently learn the density to approximately solve the coverage problem while preserving the agents' safety.
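As a rough illustration of that idea, the toy sketch below learns an unknown density online and greedily picks sensing locations from a conservatively estimated safe set. The confidence widths, the safe-set expansion rule, and all names are simplifying assumptions for exposition, not the paper's algorithm.

```python
import numpy as np

# Toy sketch: estimate an unknown density per grid cell from noisy samples
# and pick the next sensing location optimistically, but only from cells
# that have been certified safe. Illustrative only.

rng = np.random.default_rng(1)
n = 30
true_density = np.exp(-((np.arange(n) - 20) ** 2) / 40.0)  # unknown to agent
true_safety = np.where(np.arange(n) < 25, 1.0, -1.0)        # unsafe tail

mean = np.zeros(n)          # running density estimate per cell
counts = np.zeros(n)
safe = np.zeros(n, bool)
safe[0] = True              # a priori known-safe seed cell

for t in range(40):
    width = 2.0 / np.sqrt(counts + 1)             # crude confidence width
    ucb = mean + width                            # optimistic density
    candidates = np.flatnonzero(safe)
    i = candidates[np.argmax(ucb[candidates])]    # greedy coverage pick
    y = true_density[i] + 0.05 * rng.standard_normal()
    counts[i] += 1
    mean[i] += (y - mean[i]) / counts[i]
    # expand the safe set only to neighbors we can certify as safe
    for j in (i - 1, i + 1):
        if 0 <= j < n and true_safety[j] > 0:     # stand-in for a GP check
            safe[j] = True

print("certified safe cells:", np.flatnonzero(safe))
```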
1 code implementation • 24 Jan 2022 • Bhavya Sukhija, Matteo Turchetta, David Lindner, Andreas Krause, Sebastian Trimpe, Dominik Baumann
Learning optimal control policies directly on physical systems is challenging since even a single failure can lead to costly hardware damage.
1 code implementation • 27 May 2021 • Dominik Baumann, Alonso Marco, Matteo Turchetta, Sebastian Trimpe
When learning policies for robotic systems from data, safety is a major concern, as violation of safety constraints may cause hardware damage.
1 code implementation • NeurIPS 2021 • David Lindner, Matteo Turchetta, Sebastian Tschiatschek, Kamil Ciosek, Andreas Krause
For many reinforcement learning (RL) applications, specifying a reward is difficult.
no code implementations • 19 Jan 2021 • Christopher König, Matteo Turchetta, John Lygeros, Alisa Rupenyan, Andreas Krause
Our approach builds on GoOSE, an algorithm for safe and sample-efficient Bayesian optimization.
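GoOSE's actual certification and expansion rules are more involved; the following is only a rough sketch of the overall pattern under simplifying assumptions: a Gaussian process models the unknown safety constraint, the optimizer's preferred query is evaluated only when the pessimistic confidence bound certifies it as safe, and otherwise the agent queries inside the certified set to expand it. The objective, constraint, and expansion heuristic below are all hypothetical.

```python
import numpy as np

# Sketch of a GoOSE-style safe wrapper around an unconstrained optimizer.
# A GP on the safety constraint g yields confidence bounds; a point is
# certified safe when its lower bound is nonnegative. Illustrative only.

rng = np.random.default_rng(2)

def k(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_bounds(Xs, ys, Xq, noise=1e-3, beta=2.0):
    """GP posterior confidence bounds for the safety constraint."""
    K = k(Xs, Xs) + noise * np.eye(len(Xs))
    Kq = k(Xs, Xq)
    mu = Kq.T @ np.linalg.solve(K, ys)
    var = 1.0 - np.sum(Kq * np.linalg.solve(K, Kq), axis=0)
    sd = np.sqrt(np.maximum(var, 0))
    return mu - beta * sd, mu + beta * sd

grid = np.linspace(0, 1, 101)
g = lambda x: 0.8 - x            # true constraint, safe iff g(x) >= 0
f = lambda x: -(x - 0.6) ** 2    # objective to maximize
Xs, ys = np.array([0.1]), np.array([g(0.1)])  # safe seed observation

for t in range(15):
    lo, hi = gp_bounds(Xs, ys, grid)
    pess_safe = lo >= 0                        # certified-safe set
    target = grid[np.argmax(f(grid))]          # oracle's safety-agnostic pick
    if pess_safe[np.argmin(np.abs(grid - target))]:
        x = target                             # certified: evaluate it
    else:
        # otherwise query the most uncertain certified point to expand the set
        idx = np.flatnonzero(pess_safe)
        x = grid[idx[np.argmax((hi - lo)[idx])]]
    Xs = np.append(Xs, x)
    ys = np.append(ys, g(x) + 0.01 * rng.standard_normal())

print(f"final certified-safe fraction: {pess_safe.mean():.2f}")
```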
1 code implementation • NeurIPS 2020 • Matteo Turchetta, Andrey Kolobov, Shital Shah, Andreas Krause, Alekh Agarwal
In safety-critical applications, autonomous agents may need to learn in an environment where mistakes can be very costly.
no code implementations • NeurIPS 2019 • Matteo Turchetta, Felix Berkenkamp, Andreas Krause
Existing algorithms for this problem learn about the safety of all decisions to ensure convergence.
no code implementations • 29 Oct 2019 • Matteo Turchetta, Andreas Krause, Sebastian Trimpe
In reinforcement learning (RL), an autonomous agent learns to perform complex tasks by maximizing an exogenous reward signal while interacting with its environment.
no code implementations • 2 Jul 2019 • Erik Daxberger, Anastasia Makarova, Matteo Turchetta, Andreas Krause
However, few methods exist for mixed-variable domains, and none of them can handle the discrete constraints that arise in many real-world applications.
1 code implementation • 27 Jun 2019 • Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Joschka Boedecker, Andreas Krause
We evaluate the resulting algorithm on two tasks: safely exploring the dynamics of an inverted pendulum and solving a reinforcement learning task on a cart-pole system with safety constraints.
1 code implementation • 22 Mar 2018 • Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Andreas Krause
However, these methods typically do not provide any safety guarantees, which prevents their use in safety-critical, real-world applications.
1 code implementation • NeurIPS 2017 • Felix Berkenkamp, Matteo Turchetta, Angela P. Schoellig, Andreas Krause
Reinforcement learning is a powerful paradigm for learning optimal policies from experimental data.
1 code implementation • NeurIPS 2016 • Matteo Turchetta, Felix Berkenkamp, Andreas Krause
We define safety in terms of an a priori unknown safety constraint that depends on states and actions.
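To make the idea concrete, here is a hypothetical toy sketch of safe-set expansion: new states are certified as safe from observed safety values plus an assumed Lipschitz bound, and the agent only visits certified, reachable states. The paper uses a Gaussian process with reachability and returnability conditions; the Lipschitz rule below is a simplified stand-in.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): certify new states
# as safe using observed safety values plus an assumed Lipschitz bound, and
# only visit states that are certified and reachable from a visited one.

L_CONST = 0.5          # assumed Lipschitz constant of the safety function
THRESH = 0.0           # safety threshold: state s is safe iff q(s) >= THRESH
n = 20
q = 1.0 - 0.1 * np.arange(n)   # true safety values, unknown to the agent

safe = {0}                     # a priori safe seed state
observed = {0: q[0]}

changed = True
while changed:
    changed = False
    for s, val in list(observed.items()):
        for s2 in (s - 1, s + 1):            # one-step reachable neighbors
            if 0 <= s2 < n and s2 not in safe:
                # Lipschitz lower bound on q(s2) given the observation at s
                if val - L_CONST * abs(s2 - s) >= THRESH:
                    safe.add(s2)
                    observed[s2] = q[s2]     # visit and measure the new state
                    changed = True

# The certified set stays conservative: expansion stops well before the
# true safety boundary because the lower bound must clear the threshold.
print("certified safe states:", sorted(safe))
```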