Search Results for author: Runa Eschenhagen

Found 8 papers, 4 papers with code

Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization

no code implementations 17 Apr 2023 Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Vincent Fortuin

The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.

Bayesian Optimization, Decision Making +2
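
To make the abstract's subject concrete, here is a minimal from-scratch sketch of the linearized-Laplace approximation (LLA) for a tiny regression MLP. The model, data, noise level, and prior precision are all hypothetical placeholders, not taken from the paper; this only illustrates the general LLA recipe (MAP fit, GGN posterior over weights, Jacobian-propagated predictive variance).

```python
# Minimal LLA sketch for a tiny regression net (all sizes hypothetical).
import torch
from torch.func import functional_call, jacrev

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
X = torch.linspace(-2, 2, 50).unsqueeze(-1)
y = torch.sin(3 * X) + 0.1 * torch.randn_like(X)

# 1) Fit a MAP estimate (plain L2-regularised regression).
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((model(X) - y) ** 2).mean() + 1e-3 * sum((p ** 2).sum() for p in model.parameters())
    loss.backward()
    opt.step()

# 2) Treat the net as f(theta, x) so we can take Jacobians w.r.t. theta.
params = {k: v.detach() for k, v in model.named_parameters()}
def f(p, x):
    return functional_call(model, p, (x,))

# Stack per-parameter Jacobians into an (N, P) matrix.
jac = jacrev(f)(params, X)
J = torch.cat([j.reshape(X.shape[0], -1) for j in jac.values()], dim=1)

# 3) GGN/Laplace posterior precision for a Gaussian likelihood:
#    H = J^T J / sigma^2 + prior_precision * I  (P x P is fine for a tiny net).
sigma2, prior_prec = 0.1 ** 2, 1.0
H = J.T @ J / sigma2 + prior_prec * torch.eye(J.shape[1])
Sigma = torch.linalg.inv(H)

# 4) Linearized predictive at test points: var = diag(J* Sigma J*^T) + sigma^2.
Xs = torch.linspace(-3, 3, 20).unsqueeze(-1)
jac_s = jacrev(f)(params, Xs)
Js = torch.cat([j.reshape(Xs.shape[0], -1) for j in jac_s.values()], dim=1)
mean = f(params, Xs).squeeze(-1)
var = (Js @ Sigma * Js).sum(-1) + sigma2
print(mean.shape, var.shape)  # torch.Size([20]) torch.Size([20])
```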

Approximate Bayesian Neural Operators: Uncertainty Quantification for Parametric PDEs

no code implementations 2 Aug 2022 Emilia Magnani, Nicholas Krämer, Runa Eschenhagen, Lorenzo Rosasco, Philipp Hennig

Neural operators are a type of deep architecture that learns to solve (i.e., learns the nonlinear solution operator of) partial differential equations (PDEs).

Gaussian Processes
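
For readers new to neural operators, here is a sketch of one 1-D spectral-convolution layer of the kind used in Fourier neural operators, a common neural-operator building block. The sizes are hypothetical, and this is the generic construction, not the architecture or the uncertainty-quantification method of this particular paper.

```python
# One 1-D "Fourier layer": mix channels on a few low-frequency modes.
import torch

class SpectralConv1d(torch.nn.Module):
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes  # number of low-frequency Fourier modes kept
        scale = 1.0 / channels
        self.weight = torch.nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x):                      # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)               # -> (batch, channels, grid//2+1)
        out_ft = torch.zeros_like(x_ft)
        # Learned channel mixing, applied mode-by-mode on retained frequencies.
        out_ft[..., : self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[..., : self.modes], self.weight
        )
        return torch.fft.irfft(out_ft, n=x.shape[-1])

# Usage: map input functions sampled on a grid to output functions.
layer = SpectralConv1d(channels=8, modes=12)
u = torch.randn(4, 8, 64)                      # 4 input functions on a 64-point grid
print(layer(u).shape)                          # torch.Size([4, 8, 64])
```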

Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks

1 code implementation20 May 2022 Agustinus Kristiadi, Runa Eschenhagen, Philipp Hennig

We show that the resulting posterior approximation is competitive with even the gold-standard full-batch Hamiltonian Monte Carlo.
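
As context for the claim above, here is a hedged sketch of the Monte-Carlo predictive that such posterior approximations plug into: draw weight samples from an approximate posterior q(theta) = N(mu, Sigma) over the last layer and average the softmax outputs. The model, sizes, and q itself are placeholders; the paper's contribution is refining q (e.g., beyond a plain Gaussian), which is not reproduced here.

```python
# Monte-Carlo predictive from a placeholder last-layer Gaussian posterior.
import torch

torch.manual_seed(0)
features = torch.randn(32, 20)          # penultimate-layer features, 32 inputs
n_classes, n_samples = 5, 100

# Placeholder q(theta) over the flattened last-layer weights.
mu = torch.randn(20 * n_classes)
L = 0.1 * torch.eye(20 * n_classes)     # Cholesky factor of Sigma
q = torch.distributions.MultivariateNormal(mu, scale_tril=L)

# p(y|x) ~= (1/S) * sum_s softmax(f(x; theta_s))
W = q.sample((n_samples,)).view(n_samples, n_classes, 20)
logits = torch.einsum("skd,nd->snk", W, features)
probs = torch.softmax(logits, dim=-1).mean(0)
print(probs.shape)                      # torch.Size([32, 5])
```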

Laplace Redux -- Effortless Bayesian Deep Learning

2 code implementations NeurIPS 2021 Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, Philipp Hennig

Bayesian formulations of deep learning have been shown to have compelling theoretical properties and offer practical functional benefits, such as improved predictive uncertainty quantification and model selection.

Misconceptions, Model Selection
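
This paper's code is the `laplace` package (laplace-torch). Below is a short sketch of its post-hoc workflow; the model and data are toy placeholders, and keyword defaults may differ across package versions.

```python
# Post-hoc Laplace with the laplace-torch package (pip install laplace-torch).
import torch
from torch.utils.data import DataLoader, TensorDataset
from laplace import Laplace

model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3))
X, y = torch.randn(256, 10), torch.randint(0, 3, (256,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=64)
# ... assume `model` has already been trained to a MAP estimate ...

# Laplace approximation over the last-layer weights, Kronecker-factored Hessian.
la = Laplace(model, 'classification',
             subset_of_weights='last_layer',
             hessian_structure='kron')
la.fit(train_loader)
la.optimize_prior_precision(method='marglik')  # tune the prior via marginal likelihood

# Approximate predictive distribution on new inputs.
probs = la(torch.randn(5, 10), link_approx='probit')
print(probs.shape)  # torch.Size([5, 3])
```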

Continual Deep Learning by Functional Regularisation of Memorable Past

1 code implementation NeurIPS 2020 Pingbo Pan, Siddharth Swaroop, Alexander Immer, Runa Eschenhagen, Richard E. Turner, Mohammad Emtiyaz Khan

Continually learning new skills is important for intelligent systems, yet standard deep learning methods suffer from catastrophic forgetting of the past.
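
To illustrate the function-space idea, here is a simplified sketch of functional regularisation for continual learning: penalise how much the current network's outputs drift on a small stored set of past inputs ("memorable points"). This is the generic idea only; the paper's FROMP objective weights this penalty with a kernel derived from a Laplace/GGN approximation, which is omitted here, and all names and shapes below are hypothetical.

```python
# Generic function-space regulariser over stored past inputs.
import torch

def functional_reg_loss(model, batch, memory, lam=1.0):
    x, y = batch
    x_mem, out_mem = memory               # stored inputs and old outputs on them
    task_loss = torch.nn.functional.cross_entropy(model(x), y)
    drift = ((model(x_mem) - out_mem) ** 2).mean()  # keep old predictions in place
    return task_loss + lam * drift

# Usage with toy shapes:
model = torch.nn.Linear(10, 3)
batch = (torch.randn(16, 10), torch.randint(0, 3, (16,)))
memory = (torch.randn(8, 10), torch.randn(8, 3))   # snapshot from the previous task
loss = functional_reg_loss(model, batch, memory)
loss.backward()
```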

Practical Deep Learning with Bayesian Principles

1 code implementation NeurIPS 2019 Kazuki Osawa, Siddharth Swaroop, Anirudh Jain, Runa Eschenhagen, Richard E. Turner, Rio Yokota, Mohammad Emtiyaz Khan

Importantly, the benefits of Bayesian principles are preserved: predictive probabilities are well-calibrated, uncertainties on out-of-distribution data are improved, and continual-learning performance is boosted.

Continual Learning, Data Augmentation +1
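
Since the snippet's claims centre on calibration, here is a short sketch of expected calibration error (ECE), a standard metric behind statements like "predictive probabilities are well-calibrated". The 15-bin equal-width scheme is the common convention, not necessarily the paper's exact evaluation setup.

```python
# Expected calibration error: bin predictions by confidence and compare
# each bin's accuracy with its mean confidence.
import torch

def expected_calibration_error(probs, labels, n_bins=15):
    conf, pred = probs.max(dim=-1)
    correct = (pred == labels).float()
    ece = torch.zeros(())
    edges = torch.linspace(0, 1, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # |accuracy - confidence| in this bin, weighted by bin mass.
            ece += mask.float().mean() * (correct[mask].mean() - conf[mask].mean()).abs()
    return ece

probs = torch.softmax(torch.randn(100, 10), dim=-1)
labels = torch.randint(0, 10, (100,))
print(expected_calibration_error(probs, labels))
```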
