Search Results for author: Lukas P. Fröhlich

Found 5 papers, 2 papers with code

On-Policy Model Errors in Reinforcement Learning

no code implementations · ICLR 2022 · Lukas P. Fröhlich, Maksym Lefarov, Melanie N. Zeilinger, Felix Berkenkamp

In contrast, model-based methods can use the learned model to generate new data, but model errors and bias can render learning unstable or suboptimal.

reinforcement-learning · Reinforcement Learning (RL)
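The snippet above notes that synthetic data from a learned model comes at the cost of model error and bias. A toy illustration of why (not the paper's on-policy correction scheme, and with hypothetical linear dynamics chosen only for the example): a small one-step bias in a learned model compounds over the rollout horizon, so long synthetic rollouts drift away from the true trajectory.

```python
# Toy sketch: bias in a learned one-step model compounds over a rollout.
# Both dynamics functions are hypothetical choices for illustration.

def true_step(s, a):
    return 0.9 * s + a        # "true" environment dynamics (toy)

def learned_step(s, a):
    return 0.85 * s + a       # learned model with a small bias (toy)

def rollout(step, s0, actions):
    # Iterate the one-step model to produce a state trajectory.
    states = [s0]
    for a in actions:
        states.append(step(states[-1], a))
    return states

actions = [0.1] * 10
real = rollout(true_step, 0.0, actions)
synthetic = rollout(learned_step, 0.0, actions)

# Per-step model error grows with the rollout horizon:
errors = [abs(r - m) for r, m in zip(real, synthetic)]
assert errors[-1] > errors[2] > 0.0
```

This horizon-dependent compounding is the standard argument for why purely model-generated data can destabilize learning, which is the failure mode the snippet refers to.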

Noisy-Input Entropy Search for Efficient Robust Bayesian Optimization

1 code implementation · 7 Feb 2020 · Lukas P. Fröhlich, Edgar D. Klenske, Julia Vinogradska, Christian Daniel, Melanie N. Zeilinger

We consider the problem of robust optimization within the well-established Bayesian optimization (BO) framework.

Bayesian Optimization
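The robust-optimization setting above can be illustrated with a toy example (hypothetical objective, not the paper's entropy-search acquisition): instead of optimizing f(x) directly, one optimizes the expectation of f under perturbed inputs, which favors broad optima over sharp ones.

```python
import random

# Toy robust objective: a tall but narrow peak at x = 0.0 versus a
# lower but broad plateau around x = 1.0 (hypothetical function).

def f(x):
    if abs(x) < 0.05:
        return 1.5            # sharp nominal optimum
    if abs(x - 1.0) < 0.4:
        return 1.0            # broad, robust region
    return 0.0

def robust_value(x, noise=0.2, n=4000, seed=0):
    # Monte-Carlo estimate of E[f(x + eps)] with eps ~ N(0, noise^2).
    rng = random.Random(seed)
    return sum(f(x + rng.gauss(0.0, noise)) for _ in range(n)) / n

# Nominally, the sharp peak wins; under input noise, the plateau wins:
assert f(0.0) > f(1.0)
assert robust_value(1.0) > robust_value(0.0)
```

The design point is that the robust and nominal optima can differ, which is exactly why a robust BO method must reason about input noise rather than only observation noise.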

Bayesian Optimization for Policy Search in High-Dimensional Systems via Automatic Domain Selection

no code implementations · 21 Jan 2020 · Lukas P. Fröhlich, Edgar D. Klenske, Christian G. Daniel, Melanie N. Zeilinger

Bayesian Optimization (BO) is an effective method for optimizing expensive-to-evaluate black-box functions with a wide range of applications for example in robotics, system design and parameter optimization.

Bayesian Optimization
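The BO loop the snippet refers to can be sketched end to end in a minimal, self-contained form (a generic textbook setup, not this paper's method: the kernel, acquisition, and toy objective are all hypothetical choices): fit a GP surrogate to the evaluations so far, then pick the next query point by an upper-confidence-bound acquisition.

```python
import math

def k(a, b, ell=0.3):
    # Squared-exponential (RBF) kernel.
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (small dense systems).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(X, y, x, noise=1e-4):
    # Standard GP regression: mean = k_x^T K^{-1} y,
    # var = k(x, x) - k_x^T K^{-1} k_x, with diagonal jitter `noise`.
    K = [[k(a, b) + (noise if i == j else 0.0) for j, b in enumerate(X)]
         for i, a in enumerate(X)]
    kx = [k(a, x) for a in X]
    mean = sum(w * v for w, v in zip(kx, solve(K, y)))
    var = k(x, x) - sum(a * b for a, b in zip(kx, solve(K, kx)))
    return mean, max(var, 1e-12)

def objective(x):
    # Toy "expensive" black-box with its maximum at x = 0.7.
    return -(x - 0.7) ** 2

grid = [i / 50 for i in range(51)]
X, y = [0.0, 1.0], [objective(0.0), objective(1.0)]

def ucb(x):
    # Upper confidence bound: posterior mean + 2 posterior std.
    m, v = gp_posterior(X, y, x)
    return m + 2.0 * math.sqrt(v)

for _ in range(10):
    # Query the unsampled grid point with the highest acquisition value.
    x_next = max((x for x in grid if x not in X), key=ucb)
    X.append(x_next)
    y.append(objective(x_next))

best_x = X[y.index(max(y))]
```

With only a dozen evaluations the loop concentrates queries near the optimum, which is the "expensive-to-evaluate" selling point of BO that the snippet mentions.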

On Simulation and Trajectory Prediction with Gaussian Process Dynamics

no code implementations · L4DC 2020 · Lukas Hewing, Elena Arcari, Lukas P. Fröhlich, Melanie N. Zeilinger

Second, we propose a linearization-based technique that directly provides approximations of the trajectory distribution, taking correlations explicitly into account.

Trajectory Prediction
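The linearization-based idea in the snippet can be sketched in 1D (a generic extended-Kalman-style moment propagation under a hypothetical dynamics function, not necessarily the paper's exact scheme, and omitting the cross-time correlations the paper additionally tracks): a Gaussian state N(mu, var) is pushed through a nonlinear one-step model via its derivative.

```python
import math

def f(s):
    return math.sin(s)        # hypothetical one-step dynamics

def df(s):
    return math.cos(s)        # its derivative (the 1D Jacobian)

def propagate(mu, var, process_noise=0.01):
    # Linearize around the current mean: mu' = f(mu),
    # var' = A var A + Q with A = df(mu).
    a = df(mu)
    return f(mu), a * var * a + process_noise

mu, var = 0.5, 0.1
for _ in range(3):
    mu, var = propagate(mu, var)
assert 0.4 < mu < 0.5 and 0.0 < var < 0.1
```

Iterating this map yields an approximate Gaussian over each state along the trajectory without sampling, which is the appeal of linearization over Monte-Carlo rollouts.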
