Search Results for author: Felix Berkenkamp

Found 24 papers, 13 papers with code

On-Policy Model Errors in Reinforcement Learning

no code implementations • ICLR 2022 • Lukas P. Fröhlich, Maksym Lefarov, Melanie N. Zeilinger, Felix Berkenkamp

In contrast, model-based methods can use the learned model to generate new data, but model errors and bias can render learning unstable or suboptimal.

reinforcement-learning
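
The tension described in the excerpt above — a learned model can generate extra data, but its errors compound along imagined rollouts — can be illustrated with a minimal sketch (the toy dynamics and numbers are assumptions, not the paper's setup):

```python
import numpy as np

# Toy 1-D "true" dynamics and a slightly biased learned model (hypothetical).
def true_step(x, a):
    return 0.9 * x + a

def learned_step(x, a):
    return 0.92 * x + a  # small, constant model bias

rng = np.random.default_rng(0)
x_true = x_model = 1.0
errors = []
for t in range(20):
    a = rng.uniform(-0.1, 0.1)          # same action sequence for both systems
    x_true = true_step(x_true, a)
    x_model = learned_step(x_model, a)  # model-generated ("imagined") rollout
    errors.append(abs(x_model - x_true))

# One-step errors are tiny, but they accumulate over the rollout horizon,
# which is why purely model-generated data can destabilise learning.
print([round(e, 4) for e in errors])
```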

Generative Posterior Networks for Approximately Bayesian Epistemic Uncertainty Estimation

no code implementations • 29 Sep 2021 • Melrose Roderick, Felix Berkenkamp, Fatemeh Sheikholeslami, J Zico Kolter

Ensembles of neural networks are often used to estimate epistemic uncertainty in high-dimensional problems because of their scalability and ease of use.
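
The baseline the excerpt refers to is the deep-ensemble recipe: train several networks that differ only in their random seed and read their disagreement as epistemic uncertainty. Below is a minimal scikit-learn sketch of that baseline (not the generative posterior network proposed in the paper); the data and network sizes are made up:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Train an ensemble of small networks that differ only in their random seed.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=s).fit(X, y)
    for s in range(5)
]

X_test = np.linspace(-6, 6, 7).reshape(-1, 1)   # includes out-of-distribution inputs
preds = np.stack([m.predict(X_test) for m in ensemble])

mean = preds.mean(axis=0)
epistemic_std = preds.std(axis=0)   # disagreement between members ~ epistemic uncertainty
for x, m_, s_ in zip(X_test[:, 0], mean, epistemic_std):
    print(f"x={x:+.1f}  mean={m_:+.2f}  epistemic_std={s_:.2f}")
```

Far from the training data the members disagree, so the reported uncertainty grows.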

Probabilistic Meta-Learning for Bayesian Optimization

no code implementations • 1 Jan 2021 • Felix Berkenkamp, Anna Eivazi, Lukas Grossberger, Kathrin Skubch, Jonathan Spitz, Christian Daniel, Stefan Falkner

Transfer and meta-learning algorithms leverage evaluations on related tasks in order to significantly speed up learning or optimization on a new problem.

Meta-Learning • Transfer Learning
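
One simple way to realise the transfer idea in the excerpt is to warm-start the surrogate for a new task with evaluations from a related task, for example by using a GP fitted on the related task as a prior mean. The sketch below shows that generic scheme, not the probabilistic meta-learning model of the paper; the objectives and data are made up:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical related task: plenty of evaluations of a similar objective.
X_rel = rng.uniform(0, 10, size=(40, 1))
y_rel = np.sin(X_rel[:, 0]) + 0.05 * rng.normal(size=40)

# New task: only a handful of evaluations of a shifted objective.
X_new = rng.uniform(0, 10, size=(4, 1))
y_new = np.sin(X_new[:, 0]) + 0.3 + 0.05 * rng.normal(size=4)

# Step 1: fit a surrogate on the related task and use it as a prior mean.
prior = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2).fit(X_rel, y_rel)

# Step 2: on the new task, model only the residual w.r.t. that prior mean.
residual = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2).fit(
    X_new, y_new - prior.predict(X_new)
)

X_q = np.linspace(0, 10, 5).reshape(-1, 1)
pred = prior.predict(X_q) + residual.predict(X_q)
print(pred)  # informed by 40 related-task evaluations despite only 4 new-task points
```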

Efficient Model-Based Reinforcement Learning through Optimistic Policy Search and Planning

1 code implementation • NeurIPS 2020 • Sebastian Curi, Felix Berkenkamp, Andreas Krause

Based on this theoretical foundation, we show how optimistic exploration can be easily combined with state-of-the-art reinforcement learning algorithms and different probabilistic models.

Model-based Reinforcement Learning • reinforcement-learning
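
The optimistic-exploration principle mentioned in the excerpt — act as if the most favourable model consistent with the data were true — can be shown in a toy bandit setting, with simple per-action statistics standing in for the probabilistic model. This is an illustration of the principle only, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.45])    # unknown to the agent
beta = 2.0                                  # optimism parameter

rewards = [[] for _ in true_means]
for t in range(200):
    # Probabilistic estimate per action: mean and standard error from observed data.
    stats = []
    for obs in rewards:
        if len(obs) < 2:
            stats.append((0.0, 1.0))        # wide uncertainty before data exists
        else:
            stats.append((np.mean(obs), np.std(obs) / np.sqrt(len(obs))))
    # Optimistic exploration: act as if the most favourable plausible value is true.
    ucb = [m + beta * s for m, s in stats]
    a = int(np.argmax(ucb))
    rewards[a].append(true_means[a] + 0.1 * rng.normal())

print([len(obs) for obs in rewards])  # most pulls should go to the best action
```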

Structured Variational Inference in Unstable Gaussian Process State Space Models

1 code implementation • 16 Jul 2019 • Silvan Melchior, Sebastian Curi, Felix Berkenkamp, Andreas Krause

Finally, we show experimentally that our learning algorithm performs well in stable and unstable real systems with hidden states.

Gaussian Processes • Variational Inference

Learning-based Model Predictive Control for Safe Exploration and Reinforcement Learning

1 code implementation • 27 Jun 2019 • Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Joschka Boedecker, Andreas Krause

We evaluate the resulting algorithm to safely explore the dynamics of an inverted pendulum and to solve a reinforcement learning task on a cart-pole system with safety constraints.

reinforcement-learning • Safe Exploration

No-Regret Bayesian Optimization with Unknown Hyperparameters

no code implementations • 10 Jan 2019 • Felix Berkenkamp, Angela P. Schoellig, Andreas Krause

In this paper, we present the first BO algorithm that is provably no-regret and converges to the optimum without knowledge of the hyperparameters.
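
A typical failure mode without hyperparameter knowledge is a lengthscale chosen too large, which makes GP-UCB overconfident and able to miss the optimum. The sketch below only illustrates the general scheduling idea of slowly enlarging the assumed function class (here by shrinking the lengthscale over iterations); it is a loose reading of the abstract, not the paper's algorithm or its regret guarantee, and the objective and schedule are assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):                       # toy 1-D objective, unknown to the optimiser
    return -np.sin(3 * x) - x**2 + 0.7 * x

X_grid = np.linspace(-1.0, 2.0, 300).reshape(-1, 1)
X_obs, y_obs = [np.array([[0.0]])], [objective(0.0)]

for t in range(1, 21):
    # Assumed lengthscale shrinks slowly with t, enlarging the effective function class.
    ell = 1.0 / (1.0 + 0.1 * t)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=ell), optimizer=None, alpha=1e-4)
    gp.fit(np.vstack(X_obs), np.array(y_obs))
    mu, sigma = gp.predict(X_grid, return_std=True)
    x_next = X_grid[np.argmax(mu + 2.0 * sigma)]   # GP-UCB acquisition on a grid
    X_obs.append(x_next.reshape(1, 1))
    y_obs.append(objective(x_next[0]))

print("best value found:", max(y_obs))
```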

Learning to Compensate Photovoltaic Power Fluctuations from Images of the Sky by Imitating an Optimal Policy

no code implementations • 13 Nov 2018 • Robin Spiess, Felix Berkenkamp, Jan Poland, Andreas Krause

In this paper, we present a deep learning approach that uses images of the sky to compensate power fluctuations predictively and reduces battery stress.

Imitation Learning

The Lyapunov Neural Network: Adaptive Stability Certification for Safe Learning of Dynamical Systems

1 code implementation • 2 Aug 2018 • Spencer M. Richards, Felix Berkenkamp, Andreas Krause

We demonstrate our method by learning the safe region of attraction for a simulated inverted pendulum.
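
For context, the classical counterpart of what the paper learns is a fixed quadratic Lyapunov function obtained from the linearised dynamics, whose largest level set satisfying the decrease condition gives a (typically conservative) region-of-attraction estimate. Below is a small numerical sketch of that classical baseline for a damped pendulum, not the paper's learned neural-network Lyapunov function; all parameters are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Toy damped pendulum, Euler-discretised (parameters are illustrative assumptions).
dt, g, l, b = 0.01, 9.81, 1.0, 0.5

def step(theta, omega):
    return theta + dt * omega, omega + dt * (-(g / l) * np.sin(theta) - b * omega)

# Quadratic Lyapunov candidate V(x) = x' P x from the linearisation at the origin.
A = np.array([[1.0, dt], [-dt * g / l, 1.0 - dt * b]])
P = solve_discrete_lyapunov(A.T, np.eye(2))

def V(theta, omega):
    return P[0, 0] * theta**2 + 2 * P[0, 1] * theta * omega + P[1, 1] * omega**2

# Check the one-step decrease condition on a grid, then keep the largest level set
# V(x) <= c inside which every grid point satisfies it.
th = np.linspace(-3.0, 3.0, 301)
om = np.linspace(-6.0, 6.0, 301)
TH, OM = np.meshgrid(th, om)
TH2, OM2 = step(TH, OM)
lev = V(TH, OM)
ok = V(TH2, OM2) < lev
mask = lev > 1e-6                        # ignore a tiny ball around the equilibrium
bad = lev[mask & ~ok]
c = bad.min() if bad.size else lev[mask].max()
print(f"estimated region of attraction: V(x) <= {c:.1f}")
```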

Learning-based Model Predictive Control for Safe Exploration

1 code implementation • 22 Mar 2018 • Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Andreas Krause

However, these methods typically do not provide any safety guarantees, which prevents their use in safety-critical, real-world applications.

Safe Exploration

Verifying Controllers Against Adversarial Examples with Bayesian Optimization

1 code implementation • 23 Feb 2018 • Shromona Ghosh, Felix Berkenkamp, Gireeja Ranade, Shaz Qadeer, Ashish Kapoor

We specify safety constraints using logic and exploit structure in the problem in order to test the system for adversarial counter examples that violate the safety specifications.

reinforcement-learning
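
The testing loop the excerpt describes can be sketched as follows: search over environment parameters for a counterexample whose safety margin becomes negative. Plain random search stands in here for the Bayesian-optimization search used in the paper, and the pendulum-plus-controller system is a made-up stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.02

def rollout(theta0, disturbance):
    """Simulate a proportional-derivative controller stabilising a pendulum near upright."""
    theta, omega = theta0, 0.0
    worst = 0.0
    for _ in range(300):
        u = -12.0 * theta - 3.0 * omega + disturbance   # controller + constant push
        omega += dt * (9.81 * np.sin(theta) + u)
        theta += dt * omega
        worst = max(worst, abs(theta))
    return worst

# Safety specification: |theta| must stay below 0.5 rad for every tested condition.
# Margin > 0 means safe; the search tries to drive it negative.
def margin(params):
    theta0, disturbance = params
    return 0.5 - rollout(theta0, disturbance)

best, best_params = np.inf, None
for _ in range(200):                     # random search as a stand-in for BO
    params = (rng.uniform(-0.3, 0.3), rng.uniform(-2.0, 2.0))
    m = margin(params)
    if m < best:
        best, best_params = m, params

if best < 0:
    print("counterexample found:", best_params, "margin:", best)
else:
    print("no violation found; smallest margin:", best)
```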

Bayesian Optimization with Safety Constraints: Safe and Automatic Parameter Tuning in Robotics

3 code implementations • 14 Feb 2016 • Felix Berkenkamp, Andreas Krause, Angela P. Schoellig

While an initial guess for the parameters may be obtained from dynamic models of the robot, parameters are usually tuned manually on the real system to achieve the best performance.

Safe Controller Optimization for Quadrotors with Gaussian Processes

3 code implementations • 3 Sep 2015 • Felix Berkenkamp, Angela P. Schoellig, Andreas Krause

One of the most fundamental problems when designing controllers for dynamic systems is the tuning of the controller parameters.

Robotics
