Search Results for author: Felix Berkenkamp

Found 32 papers, 15 papers with code

Safe Controller Optimization for Quadrotors with Gaussian Processes

3 code implementations · 3 Sep 2015 · Felix Berkenkamp, Angela P. Schoellig, Andreas Krause

One of the most fundamental problems when designing controllers for dynamic systems is the tuning of the controller parameters.

Robotics
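
The core loop (SafeOpt) is compact: maintain a GP posterior over the performance objective and only ever evaluate parameters whose GP lower confidence bound stays above a safety threshold. A minimal 1-D sketch using scikit-learn's GP; the objective `measure_performance`, the threshold, and the confidence factor are illustrative assumptions, not the paper's quadrotor setup:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def measure_performance(theta):
    # Hypothetical experiment: noisy performance of a controller gain theta.
    return float(-(theta - 0.6) ** 2 + 0.5 + 0.01 * np.random.randn())

J_MIN = 0.2                                   # performance safety threshold
candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
X = np.array([[0.5]])                         # known-safe initial parameter
y = np.array([measure_performance(0.5)])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
for _ in range(20):
    gp.fit(X, y)
    mu, std = gp.predict(candidates, return_std=True)
    safe = mu - 2.0 * std > J_MIN             # pessimistic (safe) set
    if not safe.any():
        break
    # Evaluate the most promising parameter within the safe set (UCB).
    idx = np.flatnonzero(safe)[np.argmax((mu + 2.0 * std)[safe])]
    X = np.vstack([X, candidates[idx:idx + 1]])
    y = np.append(y, measure_performance(candidates[idx, 0]))

print("best safe parameter:", X[np.argmax(y), 0])
```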

The Lyapunov Neural Network: Adaptive Stability Certification for Safe Learning of Dynamical Systems

1 code implementation · 2 Aug 2018 · Spencer M. Richards, Felix Berkenkamp, Andreas Krause

We demonstrate our method by learning the safe region of attraction for a simulated inverted pendulum.
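
The certification step can be illustrated without the neural network: given discrete-time dynamics f and a Lyapunov candidate V, the safe region of attraction is estimated as the largest sublevel set of V on which V strictly decreases along trajectories. A sketch on a damped-pendulum grid; the hand-crafted V below stands in for the paper's learned network:

```python
import numpy as np

DT = 0.01

def f(x):
    # One Euler step of a damped pendulum, x = (angle, angular velocity).
    th, om = x[..., 0], x[..., 1]
    return np.stack([th + DT * om, om + DT * (-np.sin(th) - 0.3 * om)], axis=-1)

def V(x):
    # Hand-crafted Lyapunov candidate (the paper learns this as a network).
    th, om = x[..., 0], x[..., 1]
    return 0.5 * om**2 + (1.0 - np.cos(th)) + 0.1 * th * om

# Discretize the state space (grid chosen so it avoids the exact origin,
# where the strict decrease condition cannot hold).
g = np.linspace(-4.0, 4.0, 80)
grid = np.stack(np.meshgrid(g, g, indexing="ij"), axis=-1).reshape(-1, 2)
v, decrease = V(grid), V(f(grid)) < V(grid)

# The largest sublevel set {V < c} on which V strictly decreases is a
# discretized inner estimate of the safe region of attraction.
order = np.argsort(v)
bad = np.flatnonzero(~decrease[order])
c = v[order[bad[0]]] if bad.size else v[order[-1]]
print(f"certified region of attraction: V(x) < {c:.3f}")
```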

Bayesian Optimization with Safety Constraints: Safe and Automatic Parameter Tuning in Robotics

3 code implementations · 14 Feb 2016 · Felix Berkenkamp, Andreas Krause, Angela P. Schoellig

While an initial guess for the parameters may be obtained from dynamic models of the robot, parameters are usually tuned manually on the real system to achieve the best performance.

Bayesian Optimization

Learning-based Model Predictive Control for Safe Exploration

1 code implementation · 22 Mar 2018 · Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Andreas Krause

However, these methods typically do not provide any safety guarantees, which prevents their use in safety-critical, real-world applications.

Model Predictive Control · Safe Exploration

Learning-based Model Predictive Control for Safe Exploration and Reinforcement Learning

1 code implementation · 27 Jun 2019 · Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Joschka Boedecker, Andreas Krause

We evaluate the resulting algorithm to safely explore the dynamics of an inverted pendulum and to solve a reinforcement learning task on a cart-pole system with safety constraints.

Model Predictive Control · reinforcement-learning · +2
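
The safety mechanism shared by both MPC papers, stripped to its essentials: before applying an exploratory action, certify that some predicted trajectory keeps all future states inside the constraint set, and otherwise fall back to a known safe controller. A toy random-shooting version for a scalar system; the dynamics, constraint, and backup gain are illustrative, not the papers' GP-based formulation:

```python
import numpy as np

H, X_MAX = 10, 1.0                       # planning horizon, constraint |x| <= 1

def step(x, u):
    # Toy known dynamics; the papers use a learned GP model with
    # propagated confidence intervals here instead.
    return 0.9 * x + 0.5 * u

def trajectory_safe(x, us):
    # Check that the whole predicted trajectory respects the constraint.
    for u in us:
        x = step(x, u)
        if abs(x) > X_MAX:
            return False
    return True

def safe_action(x, u_explore, rng):
    # Accept the exploratory action only if it can be completed by *some*
    # recovery sequence that keeps the predicted trajectory feasible.
    for _ in range(100):                 # random shooting over recovery plans
        us = np.concatenate([[u_explore], rng.uniform(-1, 1, H - 1)])
        if trajectory_safe(x, us):
            return u_explore
    return -0.5 * x                      # backup: known safe feedback law

rng = np.random.default_rng(0)
x = 0.8
for _ in range(20):
    x = step(x, safe_action(x, rng.uniform(-1, 1), rng))
    assert abs(x) <= X_MAX               # safety maintained at every step
print("final state:", round(x, 3))
```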

Efficient Model-Based Reinforcement Learning through Optimistic Policy Search and Planning

1 code implementation · NeurIPS 2020 · Sebastian Curi, Felix Berkenkamp, Andreas Krause

Based on this theoretical foundation, we show how optimistic exploration can be easily combined with state-of-the-art reinforcement learning algorithms and different probabilistic models.

Model-based Reinforcement Learning · reinforcement-learning · +1
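
The optimism principle here (H-UCRL) can be condensed to one trick: let the planner choose not only the actions but also, within the model's epistemic confidence interval, the disturbance, i.e. plan through mean(s, a) + beta * sigma(s, a) * eta with a planner-chosen eta in [-1, 1]. A minimal sketch with a fake ensemble standing in for the probabilistic model; the toy reward and the shooting planner are assumptions:

```python
import numpy as np

BETA, H = 1.0, 5                          # optimism scale, planning horizon

def model_mean_std(s, a):
    # Stand-in for an ensemble/GP posterior over the next state.
    preds = np.array([0.9 * s + a, 0.85 * s + 1.1 * a, 0.95 * s + 0.9 * a])
    return preds.mean(), preds.std()

def reward(s):
    return -abs(s - 2.0)                  # toy objective: steer the state to 2

def optimistic_return(s, actions, etas):
    total = 0.0
    for a, eta in zip(actions, etas):
        mu, sigma = model_mean_std(s, a)
        s = mu + BETA * sigma * eta       # hallucinated control of uncertainty
        total += reward(s)
    return total

# Plan by random shooting over real actions *and* hallucinated etas.
rng = np.random.default_rng(0)
s0 = 0.0
plans = [(rng.uniform(-1, 1, H), rng.uniform(-1, 1, H)) for _ in range(500)]
actions, etas = max(plans, key=lambda p: optimistic_return(s0, *p))
print("first optimistic action:", round(actions[0], 3))
```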

Model-Based Uncertainty in Value Functions

1 code implementation · 24 Feb 2023 · Carlos E. Luis, Alessandro G. Bottero, Julia Vinogradska, Felix Berkenkamp, Jan Peters

We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.

Continuous Control · Model-based Reinforcement Learning · +3
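
The quantity in question can be made concrete in a tabular toy: sample transition models from a posterior, and the epistemic variance of the value function obeys a Bellman-style recursion (a UBE). A sketch comparing brute-force Monte Carlo against a simplified UBE-style recursion; the Dirichlet posterior and the 3-state chain are assumptions, and the local-uncertainty term is a simplification of the paper's exact equation:

```python
import numpy as np

rng = np.random.default_rng(0)
S, GAMMA, N_MODELS = 3, 0.9, 2000

# Posterior over an unknown 3-state Markov reward process: a Dirichlet
# per transition row (an assumed stand-in for a learned model posterior).
alpha = np.array([[1.0, 4.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 4.0]])
r = np.array([0.0, 1.0, -1.0])

def value(P):
    # Solve V = r + GAMMA * P V exactly.
    return np.linalg.solve(np.eye(S) - GAMMA * P, r)

# Brute force: epistemic std of V across models drawn from the posterior.
Ps = np.stack([[rng.dirichlet(a) for a in alpha] for _ in range(N_MODELS)])
Vs = np.stack([value(P) for P in Ps])
print("Monte Carlo std of V:", Vs.std(axis=0).round(3))

# UBE-style recursion: local uncertainty propagated through the mean model
# (simplified; the paper derives an exact uncertainty Bellman equation).
P_mean, V_mean = Ps.mean(axis=0), Vs.mean(axis=0)
local = GAMMA**2 * np.var(Ps @ V_mean, axis=0)   # Var over models of E[V(s')]
U = np.zeros(S)
for _ in range(500):
    U = local + GAMMA**2 * P_mean @ U
print("UBE-style std of V: ", np.sqrt(U).round(3))
```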

Structured Variational Inference in Unstable Gaussian Process State Space Models

1 code implementation · 16 Jul 2019 · Silvan Melchior, Sebastian Curi, Felix Berkenkamp, Andreas Krause

Finally, we show experimentally that our learning algorithm performs well in stable and unstable real systems with hidden states.

Gaussian Processes · Variational Inference

Verifying Controllers Against Adversarial Examples with Bayesian Optimization

1 code implementation · 23 Feb 2018 · Shromona Ghosh, Felix Berkenkamp, Gireeja Ranade, Shaz Qadeer, Ashish Kapoor

We specify safety constraints using logic and exploit structure in the problem in order to test the system for adversarial counterexamples that violate the safety specifications.

Bayesian Optimization · reinforcement-learning · +1
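
The falsification loop is short: fit a GP to a real-valued specification margin (positive means the rollout satisfies the spec) and repeatedly test the environment parameter that minimizes the GP lower confidence bound; a negative observation is a counterexample. A sketch with a synthetic margin function standing in for simulating the closed-loop system:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def robustness(w):
    # Spec margin for environment parameter w: positive = safe rollout,
    # negative = counterexample. Stand-in for a controller simulation.
    return float(0.3 - np.exp(-10 * (w - 0.7) ** 2) + 0.05 * np.sin(20 * w))

candidates = np.linspace(0.0, 1.0, 300).reshape(-1, 1)
X = np.array([[0.1], [0.3]])
y = np.array([robustness(0.1), robustness(0.3)])

gp = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-6)
for _ in range(15):
    gp.fit(X, y)
    mu, std = gp.predict(candidates, return_std=True)
    idx = np.argmin(mu - 2.0 * std)               # most promising falsifier
    X = np.vstack([X, candidates[idx:idx + 1]])
    y = np.append(y, robustness(candidates[idx, 0]))
    if y[-1] < 0:
        print(f"counterexample found at w = {candidates[idx, 0]:.3f}")
        break
```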

Information-Theoretic Safe Exploration with Gaussian Processes

1 code implementation · 9 Dec 2022 · Alessandro G. Bottero, Carlos E. Luis, Julia Vinogradska, Felix Berkenkamp, Jan Peters

We consider a sequential decision making task where we are not allowed to evaluate parameters that violate an a priori unknown (safety) constraint.

Decision Making · Gaussian Processes · +1

Learning to Compensate Photovoltaic Power Fluctuations from Images of the Sky by Imitating an Optimal Policy

no code implementations · 13 Nov 2018 · Robin Spiess, Felix Berkenkamp, Jan Poland, Andreas Krause

In this paper, we present a deep learning approach that uses images of the sky to compensate power fluctuations predictively and reduce battery stress.

Imitation Learning

No-Regret Bayesian Optimization with Unknown Hyperparameters

no code implementations · 10 Jan 2019 · Felix Berkenkamp, Angela P. Schoellig, Andreas Krause

In this paper, we present the first BO algorithm that is provably no-regret and converges to the optimum without knowledge of the hyperparameters.

Bayesian Optimization
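
The mechanism behind the guarantee is to slowly expand the GP's function class over time, e.g. by shrinking the kernel lengthscales with a growing schedule, so that an over-smooth initial guess is eventually corrected. A GP-UCB sketch with such a schedule; the specific schedule log(t + 3) and the toy objective are assumptions, not the paper's calibrated rates:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(10.0 * x)            # true function: wigglier than assumed
cand = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
X = np.array([[0.5]])
y = f(X[:, 0])
ELL0 = 1.0                                # (too large) initial lengthscale guess

for t in range(30):
    ell = ELL0 / np.log(t + 3.0)          # slowly expand the function class
    gp = GaussianProcessRegressor(kernel=RBF(ell), alpha=1e-6, optimizer=None)
    gp.fit(X, y)
    mu, std = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(mu + 2.0 * std)]      # GP-UCB acquisition
    X = np.vstack([X, x_next.reshape(1, 1)])
    y = np.append(y, f(x_next.item()))

print("best observed:", X[np.argmax(y)].item(), y.max().round(3))
```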

Probabilistic Meta-Learning for Bayesian Optimization

no code implementations · 1 Jan 2021 · Felix Berkenkamp, Anna Eivazi, Lukas Grossberger, Kathrin Skubch, Jonathan Spitz, Christian Daniel, Stefan Falkner

Transfer and meta-learning algorithms leverage evaluations on related tasks in order to significantly speed up learning or optimization on a new problem.

Bayesian Optimization · Meta-Learning · +1

On-Policy Model Errors in Reinforcement Learning

no code implementations · ICLR 2022 · Lukas P. Fröhlich, Maksym Lefarov, Melanie N. Zeilinger, Felix Berkenkamp

In contrast, model-based methods can use the learned model to generate new data, but model errors and bias can render learning unstable or suboptimal.

reinforcement-learning · Reinforcement Learning (RL)
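
For contrast, the standard recipe the paper improves on looks like this Dyna-style sketch, in which a learned one-step model pads the replay buffer with synthetic transitions and thereby imports its own bias (a generic illustration, not the paper's on-policy correction scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

# Real transitions (s, a, s') from an assumed scalar system.
s_r, a_r = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)
s2_r = 0.8 * s_r + a_r + 0.1 * rng.standard_normal(50)

# "Learned" one-step model: least-squares fit on the real transitions.
w, *_ = np.linalg.lstsq(np.column_stack([s_r, a_r]), s2_r, rcond=None)

# Dyna-style augmentation: synthetic transitions from the learned model;
# any model error now biases everything trained on the buffer.
s_m, a_m = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
s2_m = np.column_stack([s_m, a_m]) @ w

buffer = list(zip(s_r, a_r, s2_r)) + list(zip(s_m, a_m, s2_m))
print(f"replay buffer: 50 real + 200 synthetic = {len(buffer)} transitions")
```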

Value-Distributional Model-Based Reinforcement Learning

no code implementations · 12 Aug 2023 · Carlos E. Luis, Alessandro G. Bottero, Julia Vinogradska, Felix Berkenkamp, Jan Peters

We study the problem from a model-based Bayesian reinforcement learning perspective, where the goal is to learn the posterior distribution over value functions induced by parameter (epistemic) uncertainty of the Markov decision process.

Continuous Control · Decision Making · +3
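
The object being learned is the distribution over V(s) induced by posterior uncertainty in the MDP parameters. It can be approximated by brute force by sampling models and solving each one, which is the baseline the paper's value-distribution learning replaces. A sketch on an assumed 3-state chain with a Dirichlet posterior:

```python
import numpy as np

rng = np.random.default_rng(1)
S, GAMMA = 3, 0.9
alpha = np.array([[1.0, 5.0, 1.0], [3.0, 1.0, 3.0], [1.0, 1.0, 5.0]])
r = np.array([0.0, 1.0, -0.5])

def value(P):
    # Solve V = r + GAMMA * P V for a sampled transition model P.
    return np.linalg.solve(np.eye(S) - GAMMA * P, r)

# Posterior over value functions: one V(s0) per sampled model.
v0 = np.array([value(np.stack([rng.dirichlet(a) for a in alpha]))[0]
               for _ in range(5000)])
print("posterior over V(s0):",
      {q: round(float(np.quantile(v0, q)), 3) for q in (0.05, 0.5, 0.95)})
```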

Projected Off-Policy Q-Learning (POP-QL) for Stabilizing Offline Reinforcement Learning

no code implementations · 25 Nov 2023 · Melrose Roderick, Gaurav Manek, Felix Berkenkamp, J. Zico Kolter

A key problem in off-policy Reinforcement Learning (RL) is the mismatch, or distribution shift, between the dataset and the distribution over states and actions visited by the learned policy.

Q-Learning · Reinforcement Learning (RL)

Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization

no code implementations · 7 Dec 2023 · Carlos E. Luis, Alessandro G. Bottero, Julia Vinogradska, Felix Berkenkamp, Jan Peters

Previous work upper bounds the posterior variance over values by solving a so-called uncertainty Bellman equation (UBE), but the over-approximation may result in inefficient exploration.

Model-based Reinforcement Learning · Offline RL

Generative Posterior Networks for Approximately Bayesian Epistemic Uncertainty Estimation

no code implementations · 29 Dec 2023 · Melrose Roderick, Felix Berkenkamp, Fatemeh Sheikholeslami, Zico Kolter

In many real-world problems, there is a limited set of training data, but an abundance of unlabeled data.

Information-Theoretic Safe Bayesian Optimization

no code implementations · 23 Feb 2024 · Alessandro G. Bottero, Carlos E. Luis, Julia Vinogradska, Felix Berkenkamp, Jan Peters

In this paper, we propose an information-theoretic safe exploration criterion that directly exploits the GP posterior to identify the most informative safe parameters to evaluate.

Bayesian Optimization · Decision Making · +1
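
The selection rule can be sketched with a simple stand-in: restrict attention to parameters whose GP lower confidence bound on the constraint is non-negative, and among those evaluate where the posterior is most uncertain. The paper's actual criterion is an information gain about the safety of other parameters; maximum posterior variance is used below as a crude proxy:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def constraint(x):
    # Unknown safety function; evaluated parameters must keep it >= 0.
    return float(np.cos(3 * x) + 0.5 + 0.01 * np.random.randn())

cand = np.linspace(-2.0, 2.0, 400).reshape(-1, 1)
X = np.array([[0.0]])                             # known-safe seed point
y = np.array([constraint(0.0)])
gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-4)

for _ in range(25):
    gp.fit(X, y)
    mu, std = gp.predict(cand, return_std=True)
    safe = mu - 2.0 * std >= 0.0                  # pessimistic safe set
    if not safe.any():
        break
    # Crude information proxy: evaluate the safe parameter the posterior is
    # least certain about (the paper derives a proper info-gain criterion).
    idx = np.flatnonzero(safe)[np.argmax(std[safe])]
    X = np.vstack([X, cand[idx:idx + 1]])
    y = np.append(y, constraint(cand[idx, 0]))

print(f"pessimistic safe set: {safe.mean():.0%} of candidates")
```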
