Search Results for author: Jonas Rothfuss

Found 23 papers, 13 papers with code

Data-Efficient Task Generalization via Probabilistic Model-based Meta Reinforcement Learning

no code implementations • 13 Nov 2023 • Arjun Bhardwaj, Jonas Rothfuss, Bhavya Sukhija, Yarden As, Marco Hutter, Stelian Coros, Andreas Krause

We introduce PACOH-RL, a novel model-based Meta-Reinforcement Learning (Meta-RL) algorithm designed to efficiently adapt control policies to changing dynamics.

Meta-Learning • Meta Reinforcement Learning +2

Hallucinated Adversarial Control for Conservative Offline Policy Evaluation

1 code implementation • 2 Mar 2023 • Jonas Rothfuss, Bhavya Sukhija, Tobias Birchler, Parnian Kassraie, Andreas Krause

We study the problem of conservative off-policy evaluation (COPE) where given an offline dataset of environment interactions, collected by other agents, we seek to obtain a (tight) lower bound on a policy's performance.

Continuous Control • Off-policy evaluation +1

Scalable PAC-Bayesian Meta-Learning via the PAC-Optimal Hyper-Posterior: From Theory to Practice

no code implementations • 14 Nov 2022 • Jonas Rothfuss, Martin Josifoski, Vincent Fortuin, Andreas Krause

Meta-Learning aims to speed up the learning process on new tasks by acquiring useful inductive biases from datasets of related learning tasks.

Gaussian Processes • Meta-Learning +1

Instance-Dependent Generalization Bounds via Optimal Transport

no code implementations • 2 Nov 2022 • Songyan Hou, Parnian Kassraie, Anastasis Kratsios, Andreas Krause, Jonas Rothfuss

Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.

Generalization Bounds • Inductive Bias

Lifelong Bandit Optimization: No Prior and No Regret

no code implementations • 27 Oct 2022 • Felix Schur, Parnian Kassraie, Jonas Rothfuss, Andreas Krause

Our algorithm can be paired with any kernelized or linear bandit algorithm and guarantees oracle optimal performance, meaning that as more tasks are solved, the regret of LIBO on each task converges to the regret of the bandit algorithm with oracle knowledge of the true kernel.

MARS: Meta-Learning as Score Matching in the Function Space

1 code implementation • 24 Oct 2022 • Krunoslav Lehman Pavasovic, Jonas Rothfuss, Andreas Krause

To circumvent these issues, we approach meta-learning through the lens of functional Bayesian neural network inference, which views the prior as a stochastic process and performs inference in the function space.

Meta-Learning

Meta-Learning Priors for Safe Bayesian Optimization

no code implementations • 3 Oct 2022 • Jonas Rothfuss, Christopher Koenig, Alisa Rupenyan, Andreas Krause

In the presence of unknown safety constraints, it is crucial to choose reliable model hyper-parameters to avoid safety violations.

Bayesian Optimization • Meta-Learning +1

Amortized Inference for Causal Structure Learning

1 code implementation • 25 May 2022 • Lars Lorch, Scott Sussex, Jonas Rothfuss, Andreas Krause, Bernhard Schölkopf

Rather than searching over structures, we train a variational inference model to directly predict the causal structure from observational or interventional data.

Causal Discovery • Inductive Bias +1

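To illustrate the amortized setup described above (predicting causal structure directly from data in a single forward pass, rather than searching over graphs for every new dataset), here is a minimal, hypothetical PyTorch sketch. The correlation-based pair features, layer sizes, and class name are illustrative assumptions, not the paper's model; in practice such a predictor would be trained on synthetic (dataset, graph) pairs with a per-edge cross-entropy loss.

```python
import torch
import torch.nn as nn

class EdgePredictor(nn.Module):
    """Toy amortized structure predictor: maps a whole dataset of shape (n, d)
    to a d x d matrix of edge probabilities in one forward pass."""

    def __init__(self, hidden=64):
        super().__init__()
        # Scores an ordered pair (i, j) from simple pairwise summary statistics.
        self.score = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))

    def forward(self, data):                          # data: (n, d)
        d = data.shape[1]
        x = (data - data.mean(0)) / (data.std(0) + 1e-8)
        corr = (x.T @ x) / x.shape[0]                 # (d, d) empirical correlations
        # Pair features: correlation, its magnitude, and a self-edge indicator.
        feats = torch.stack([corr, corr.abs(), torch.eye(d)], dim=-1)
        logits = self.score(feats).squeeze(-1)
        return torch.sigmoid(logits)                  # (d, d) edge probabilities

model = EdgePredictor()
fake_data = torch.randn(200, 5)                       # stand-in for observational data
print(model(fake_data).shape)                         # torch.Size([5, 5])
```
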
Meta-Learning Hypothesis Spaces for Sequential Decision-making

no code implementations • 1 Feb 2022 • Parnian Kassraie, Jonas Rothfuss, Andreas Krause

We demonstrate our approach on the kernelized bandit problem (a.k.a. Bayesian optimization), where we establish regret bounds competitive with those given the true kernel.

Bayesian Optimization • Decision Making +3

Variational Causal Networks: Approximate Bayesian Inference over Causal Structures

1 code implementation • 14 Jun 2021 • Yashas Annadani, Jonas Rothfuss, Alexandre Lacoste, Nino Scherrer, Anirudh Goyal, Yoshua Bengio, Stefan Bauer

However, acting intelligently upon causal structure that has been inferred from finite data requires reasoning about the uncertainty of that inference.

Bayesian Inference • Causal Inference +2

Meta-Learning Reliable Priors in the Function Space

no code implementations • NeurIPS 2021 • Jonas Rothfuss, Dominique Heyn, Jinfan Chen, Andreas Krause

When data are scarce, meta-learning can improve a learner's accuracy by harnessing previous experience from related learning tasks.

Bayesian Optimization • Decision Making +2

DiBS: Differentiable Bayesian Structure Learning

2 code implementations • NeurIPS 2021 • Lars Lorch, Jonas Rothfuss, Bernhard Schölkopf, Andreas Krause

In this work, we propose a general, fully differentiable framework for Bayesian structure learning (DiBS) that operates in the continuous space of a latent probabilistic graph representation.

Causal Discovery • Variational Inference

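As a rough illustration of the latent probabilistic graph representation mentioned above, the sketch below parameterizes edge probabilities through a sigmoid of inner products between per-node latent vectors, so the graph distribution becomes differentiable in the latent variables. The variable names, dimensions, and temperature are assumptions for illustration and the sketch omits DiBS's actual inference machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 4, 3          # number of nodes, latent embedding dimension

# Each node i carries two latent vectors u_i and v_i; an edge i -> j exists
# with probability sigmoid(alpha * <u_i, v_j>), which makes the graph
# distribution a differentiable function of the latent particles.
U = rng.normal(size=(d, k))
V = rng.normal(size=(d, k))
alpha = 1.0

def edge_probs(U, V, alpha):
    logits = alpha * U @ V.T
    probs = 1.0 / (1.0 + np.exp(-logits))
    np.fill_diagonal(probs, 0.0)          # no self-loops
    return probs

probs = edge_probs(U, V, alpha)
sampled_graph = (rng.uniform(size=(d, d)) < probs).astype(int)
print(probs.round(2))
print(sampled_graph)
```
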
Robustness to Pruning Predicts Generalization in Deep Neural Networks

no code implementations • 10 Mar 2021 • Lorenz Kuhn, Clare Lyle, Aidan N. Gomez, Jonas Rothfuss, Yarin Gal

Existing generalization measures that aim to capture a model's simplicity based on parameter counts or norms fail to explain generalization in overparameterized deep neural networks.

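A rough sketch of the idea suggested by the title, measuring how robust a trained network's training accuracy is to magnitude pruning, is given below. The pruning scheme, prune fractions, and 1% tolerance are illustrative assumptions, not the paper's exact protocol.

```python
import copy
import torch
import torch.nn as nn

def prune_smallest(model, fraction):
    # Global magnitude pruning: zero out the `fraction` of weight entries
    # with the smallest absolute value.
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(weights, fraction)
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:
                p.mul_((p.abs() >= threshold).float())

def pruning_robustness(model, data, labels, fractions=(0.1, 0.3, 0.5, 0.7)):
    # Largest prune fraction at which training accuracy stays within 1% of
    # the unpruned model (one simple proxy for "robustness to pruning").
    def accuracy(m):
        return (m(data).argmax(-1) == labels).float().mean().item()

    base_acc, robust_frac = accuracy(model), 0.0
    for frac in fractions:
        pruned = copy.deepcopy(model)
        prune_smallest(pruned, frac)
        if accuracy(pruned) >= base_acc - 0.01:
            robust_frac = frac
    return robust_frac

# Toy usage with an untrained classifier, just to show the interface.
net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 3))
data, labels = torch.randn(256, 10), torch.randint(0, 3, (256,))
print(pruning_robustness(net, data, labels))
```
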
Meta-Learning Bayesian Neural Network Priors Based on PAC-Bayesian Theory

no code implementations • 1 Jan 2021 • Jonas Rothfuss, Martin Josifoski, Andreas Krause

Bayesian deep learning is a promising approach towards improved uncertainty quantification and sample efficiency.

Meta-Learning • Uncertainty Quantification +1

Noise Regularization for Conditional Density Estimation

1 code implementation • 21 Jul 2019 • Jonas Rothfuss, Fabio Ferreira, Simon Boehm, Simon Walther, Maxim Ulrich, Tamim Asfour, Andreas Krause

To address this issue, we develop a model-agnostic noise regularization method for CDE that adds random perturbations to the data during training.

Density Estimation

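A minimal sketch of the noise-regularization idea described above: fresh random perturbations are added to the inputs and targets at every training step of a conditional density model. The tiny Gaussian CDE model, noise scales, and toy data are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Tiny Gaussian CDE model: maps x to the mean and log-std of p(y|x).
model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(512, 1) * 4 - 2                    # toy 1-D training data
y = torch.sin(3 * x) + 0.3 * torch.randn_like(x)

noise_x, noise_y = 0.1, 0.1                       # noise scales (hyper-parameters)
for step in range(2000):
    # Noise regularization: perturb inputs and targets with fresh Gaussian
    # noise at every step, which smooths the fitted conditional density.
    xb = x + noise_x * torch.randn_like(x)
    yb = y + noise_y * torch.randn_like(y)
    mu, log_std = model(xb).chunk(2, dim=-1)
    nll = -torch.distributions.Normal(mu, log_std.exp()).log_prob(yb).mean()
    opt.zero_grad()
    nll.backward()
    opt.step()
```
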
Conditional Density Estimation with Neural Networks: Best Practices and Benchmarks

1 code implementation • 3 Mar 2019 • Jonas Rothfuss, Fabio Ferreira, Simon Walther, Maxim Ulrich

Given a set of empirical observations, conditional density estimation aims to capture the statistical relationship between a conditional variable $\mathbf{x}$ and a dependent variable $\mathbf{y}$ by modeling their conditional probability $p(\mathbf{y}|\mathbf{x})$.

Density Estimation

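For concreteness, the sketch below fits $p(\mathbf{y}|\mathbf{x})$ with a mixture density network, one common neural conditional density estimator, by maximizing the conditional log-likelihood. The architecture and hyper-parameters are illustrative assumptions rather than any configuration benchmarked in the paper.

```python
import torch
import torch.nn as nn

class MDN(nn.Module):
    """Mixture density network: models p(y|x) as a K-component Gaussian mixture
    whose weights, means, and scales are produced by a neural network."""

    def __init__(self, n_components=5, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 3 * n_components))
        self.k = n_components

    def log_prob(self, x, y):
        logits, mu, log_std = self.net(x).split(self.k, dim=-1)
        comp = torch.distributions.Normal(mu, log_std.exp())
        log_w = torch.log_softmax(logits, dim=-1)
        # log p(y|x) = logsumexp_k [ log w_k(x) + log N(y; mu_k(x), sigma_k(x)) ]
        return torch.logsumexp(log_w + comp.log_prob(y), dim=-1)

mdn = MDN()
x, y = torch.randn(128, 1), torch.randn(128, 1)   # toy (x, y) pairs
opt = torch.optim.Adam(mdn.parameters(), lr=1e-3)
loss = -mdn.log_prob(x, y).mean()                 # fit by maximum likelihood
loss.backward()
opt.step()
```
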
Model-Based Reinforcement Learning via Meta-Policy Optimization

1 code implementation • 14 Sep 2018 • Ignasi Clavera, Jonas Rothfuss, John Schulman, Yasuhiro Fujita, Tamim Asfour, Pieter Abbeel

Finally, we demonstrate that our approach is able to match the asymptotic performance of model-free methods while requiring significantly less experience.

Model-based Reinforcement Learning • Reinforcement Learning +1

Introducing the Simulated Flying Shapes and Simulated Planar Manipulator Datasets

2 code implementations • 2 Jul 2018 • Fabio Ferreira, Jonas Rothfuss, Eren Erdal Aksoy, You Zhou, Tamim Asfour

We release two artificial datasets, Simulated Flying Shapes and Simulated Planar Manipulator, which allow testing the learning ability of video processing systems.

Deep Episodic Memory: Encoding, Recalling, and Predicting Episodic Experiences for Robot Action Execution

1 code implementation • 12 Jan 2018 • Jonas Rothfuss, Fabio Ferreira, Eren Erdal Aksoy, You Zhou, Tamim Asfour

We present a novel deep neural network architecture for representing robot experiences in an episodic-like memory which facilitates encoding, recalling, and predicting action experiences.

Retrieval
