Search Results for author: Felix Berkenkamp

Found 28 papers, 15 papers with code

Value-Distributional Model-Based Reinforcement Learning

no code implementations • 12 Aug 2023 • Carlos E. Luis, Alessandro G. Bottero, Julia Vinogradska, Felix Berkenkamp, Jan Peters

We study the problem from a model-based Bayesian reinforcement learning perspective, where the goal is to learn the posterior distribution over value functions induced by parameter (epistemic) uncertainty of the Markov decision process.

Continuous Control • Decision Making • +3
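The core idea can be stripped down to a toy Monte Carlo sketch (an illustration of value distributions under model uncertainty, not the paper's algorithm; the three-state MDP, reward vector, and Dirichlet posterior below are made up):

```python
import numpy as np

def policy_value(P, R, gamma=0.9):
    """Exact policy evaluation for a fixed policy: V = (I - gamma*P)^-1 R."""
    n = P.shape[0]
    return np.linalg.solve(np.eye(n) - gamma * P, R)

rng = np.random.default_rng(0)
n_states = 3
R = np.array([0.0, 0.0, 1.0])                            # toy per-state reward
counts = rng.integers(1, 10, size=(n_states, n_states))  # fake transition counts

# Sample transition models from a Dirichlet posterior and evaluate the policy
# in each; the spread of V(s0) across samples is the value distribution
# induced by epistemic (model) uncertainty.
values = []
for _ in range(500):
    P = np.vstack([rng.dirichlet(counts[s]) for s in range(n_states)])
    values.append(policy_value(P, R)[0])
values = np.array(values)

print(values.mean(), values.std())  # mean value and epistemic spread at s0
```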

Model-Based Uncertainty in Value Functions

1 code implementation • 24 Feb 2023 • Carlos E. Luis, Alessandro G. Bottero, Julia Vinogradska, Felix Berkenkamp, Jan Peters

We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.

Continuous Control • Model-based Reinforcement Learning • +2

Information-Theoretic Safe Exploration with Gaussian Processes

1 code implementation • 9 Dec 2022 • Alessandro G. Bottero, Carlos E. Luis, Julia Vinogradska, Felix Berkenkamp, Jan Peters

We consider a sequential decision making task where we are not allowed to evaluate parameters that violate an a priori unknown (safety) constraint.

Decision Making • Gaussian Processes • +1

On-Policy Model Errors in Reinforcement Learning

no code implementations • ICLR 2022 • Lukas P. Fröhlich, Maksym Lefarov, Melanie N. Zeilinger, Felix Berkenkamp

In contrast, model-based methods can use the learned model to generate new data, but model errors and bias can render learning unstable or suboptimal.

reinforcement-learning • Reinforcement Learning (RL)

Generative Posterior Networks for Approximately Bayesian Epistemic Uncertainty Estimation

no code implementations • 29 Sep 2021 • Melrose Roderick, Felix Berkenkamp, Fatemeh Sheikholeslami, J Zico Kolter

Ensembles of neural networks are often used to estimate epistemic uncertainty in high-dimensional problems because of their scalability and ease of use.
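The ensemble baseline the paper builds on can be illustrated with a bootstrap ensemble of polynomial regressors instead of neural networks (a minimal sketch; the toy data and degree-3 fit are made up): member disagreement grows away from the training data, signalling epistemic uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D regression data confined to [-1, 1]; far outside this range the
# ensemble members should disagree strongly.
x_train = rng.uniform(-1, 1, size=40)
y_train = np.sin(3 * x_train) + 0.1 * rng.normal(size=40)

# Bootstrap ensemble: each member is fit on a different resample of the data.
ensemble = []
for _ in range(20):
    idx = rng.integers(0, len(x_train), size=len(x_train))
    ensemble.append(np.polyfit(x_train[idx], y_train[idx], 3))

def epistemic_std(x):
    preds = np.array([np.polyval(c, x) for c in ensemble])
    return preds.std(axis=0)   # member disagreement = epistemic uncertainty

# Low disagreement in-distribution, high disagreement when extrapolating.
print(epistemic_std(np.array([0.0])), epistemic_std(np.array([3.0])))
```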

Probabilistic Meta-Learning for Bayesian Optimization

no code implementations • 1 Jan 2021 • Felix Berkenkamp, Anna Eivazi, Lukas Grossberger, Kathrin Skubch, Jonathan Spitz, Christian Daniel, Stefan Falkner

Transfer and meta-learning algorithms leverage evaluations on related tasks in order to significantly speed up learning or optimization on a new problem.

Bayesian Optimization • Meta-Learning • +1

Efficient Model-Based Reinforcement Learning through Optimistic Policy Search and Planning

1 code implementation • NeurIPS 2020 • Sebastian Curi, Felix Berkenkamp, Andreas Krause

Based on this theoretical foundation, we show how optimistic exploration can be easily combined with state-of-the-art reinforcement learning algorithms and different probabilistic models.

Model-based Reinforcement Learning • reinforcement-learning • +1

Structured Variational Inference in Unstable Gaussian Process State Space Models

1 code implementation • 16 Jul 2019 • Silvan Melchior, Sebastian Curi, Felix Berkenkamp, Andreas Krause

Finally, we show experimentally that our learning algorithm performs well in stable and unstable real systems with hidden states.

Gaussian Processes • Variational Inference

Learning-based Model Predictive Control for Safe Exploration and Reinforcement Learning

1 code implementation • 27 Jun 2019 • Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Joschka Boedecker, Andreas Krause

We evaluate the resulting algorithm to safely explore the dynamics of an inverted pendulum and to solve a reinforcement learning task on a cart-pole system with safety constraints.

reinforcement-learning • Reinforcement Learning (RL) • +1

No-Regret Bayesian Optimization with Unknown Hyperparameters

no code implementations • 10 Jan 2019 • Felix Berkenkamp, Angela P. Schoellig, Andreas Krause

In this paper, we present the first BO algorithm that is provably no-regret and converges to the optimum without knowledge of the hyperparameters.

Bayesian Optimization

Learning to Compensate Photovoltaic Power Fluctuations from Images of the Sky by Imitating an Optimal Policy

no code implementations • 13 Nov 2018 • Robin Spiess, Felix Berkenkamp, Jan Poland, Andreas Krause

In this paper, we present a deep learning approach that uses images of the sky to compensate power fluctuations predictively and reduces battery stress.

Imitation Learning

The Lyapunov Neural Network: Adaptive Stability Certification for Safe Learning of Dynamical Systems

1 code implementation • 2 Aug 2018 • Spencer M. Richards, Felix Berkenkamp, Andreas Krause

We demonstrate our method by learning the safe region of attraction for a simulated inverted pendulum.
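The certification step can be sketched for the same pendulum example, with a fixed quadratic Lyapunov function standing in for the learned neural-network one (a simplified illustration, not the paper's method; the dynamics, time step, and grid are made up): estimate the region of attraction as the largest level set of V on which V strictly decreases.

```python
import numpy as np

DT = 0.01

def step(x):
    """Euler step of a damped pendulum around its stable equilibrium."""
    theta, omega = x[..., 0], x[..., 1]
    return np.stack([theta + DT * omega,
                     omega + DT * (-np.sin(theta) - 0.5 * omega)], axis=-1)

# Quadratic Lyapunov candidate V(x) = x^T P x, with P solving the discrete
# Lyapunov equation A^T P A - P = -Q for the linearised dynamics.
A = np.eye(2) + DT * np.array([[0.0, 1.0], [-1.0, -0.5]])
Q = np.eye(2)
M = np.kron(A.T, A.T) - np.eye(4)
P = np.linalg.solve(M, -Q.flatten()).reshape(2, 2)

def V(x):
    return np.einsum('...i,ij,...j->...', x, P, x)

# Grid-based check: the largest sublevel set {V <= c} on which V strictly
# decreases along the dynamics is the region-of-attraction estimate.
g = np.linspace(-3, 3, 61)
X = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
X = X[V(X) > 1e-6]                       # skip the equilibrium itself
decreasing = V(step(X)) < V(X)
c = V(X)[~decreasing].min() if (~decreasing).any() else V(X).max()
print("region of attraction estimate: V(x) <", c)
```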

Learning-based Model Predictive Control for Safe Exploration

1 code implementation • 22 Mar 2018 • Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Andreas Krause

However, these methods typically do not provide any safety guarantees, which prevents their use in safety-critical, real-world applications.

Safe Exploration

Verifying Controllers Against Adversarial Examples with Bayesian Optimization

1 code implementation • 23 Feb 2018 • Shromona Ghosh, Felix Berkenkamp, Gireeja Ranade, Shaz Qadeer, Ashish Kapoor

We specify safety constraints using logic and exploit structure in the problem in order to test the system for adversarial counterexamples that violate the safety specifications.

Bayesian Optimization • reinforcement-learning • +1

Bayesian Optimization with Safety Constraints: Safe and Automatic Parameter Tuning in Robotics

3 code implementations • 14 Feb 2016 • Felix Berkenkamp, Andreas Krause, Angela P. Schoellig

While an initial guess for the parameters may be obtained from dynamic models of the robot, parameters are usually tuned manually on the real system to achieve the best performance.

Bayesian Optimization
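The safe-tuning idea can be sketched with a Gaussian process over a 1-D parameter: only parameters whose GP lower confidence bound clears a safety threshold are eligible for evaluation (a simplified SafeOpt-style rule, not the paper's full algorithm; the kernel lengthscale, threshold, and hidden objective below are made up).

```python
import numpy as np

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-4):
    """Posterior mean and std of a zero-mean GP with unit-variance RBF kernel."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_query, x_obs)
    mu = Ks @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def f(x):                      # hidden performance; also the safety signal
    return np.exp(-(x - 0.6) ** 2 / 0.1) - 0.3

SAFE_MIN, BETA = -0.1, 2.0
grid = np.linspace(0, 1, 200)
x_obs = np.array([0.5])        # known-safe initial parameter
y_obs = f(x_obs)

for _ in range(15):
    mu, sd = gp_posterior(x_obs, y_obs, grid)
    safe = (mu - BETA * sd) >= SAFE_MIN        # certified-safe set
    ucb = np.where(safe, mu + BETA * sd, -np.inf)
    x_next = grid[np.argmax(ucb)]              # optimistic pick inside safe set
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, f(x_next))

print(x_obs[np.argmax(y_obs)])  # best safely evaluated parameter
```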

Safe Controller Optimization for Quadrotors with Gaussian Processes

3 code implementations • 3 Sep 2015 • Felix Berkenkamp, Angela P. Schoellig, Andreas Krause

One of the most fundamental problems when designing controllers for dynamic systems is the tuning of the controller parameters.

