Search Results for author: Runa Eschenhagen

Found 12 papers, 7 papers with code

Can We Remove the Square-Root in Adaptive Gradient Methods? A Second-Order Perspective

no code implementations 5 Feb 2024 Wu Lin, Felix Dangel, Runa Eschenhagen, Juhan Bae, Richard E. Turner, Alireza Makhzani

Adaptive gradient optimizers like Adam(W) are the default training algorithms for many deep learning architectures, such as transformers.

Second-order methods
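
The "square-root" in the title refers to the denominator of Adam-style updates; dropping it turns the second-moment estimate into something closer to a diagonal second-order preconditioner. A minimal sketch of that distinction (not the authors' actual algorithm; the function name and defaults are illustrative):

```python
import numpy as np

def adaptive_step(g, m, v, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8,
                  square_root=True):
    """One element-wise adaptive-gradient step.

    With square_root=True this is the familiar Adam-style update, dividing
    by sqrt(v); with square_root=False the second-moment estimate is used
    directly, which resembles a diagonal second-order preconditioner
    (the distinction the paper's title alludes to).
    """
    m = beta1 * m + (1 - beta1) * g        # first-moment EMA
    v = beta2 * v + (1 - beta2) * g ** 2   # second-moment EMA
    denom = np.sqrt(v) + eps if square_root else v + eps
    update = -lr * m / denom
    return update, m, v
```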

Kronecker-Factored Approximate Curvature for Modern Neural Network Architectures

no code implementations NeurIPS 2023 Runa Eschenhagen, Alexander Immer, Richard E. Turner, Frank Schneider, Philipp Hennig

In this work, we identify two different settings of linear weight-sharing layers that motivate two flavours of K-FAC: expand and reduce.
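
K-FAC approximates a linear layer's curvature (Fisher/GGN) by a Kronecker product of an input-activation factor and an output-gradient factor. A rough sketch of how the expand and reduce flavours might treat a weight-sharing (e.g. sequence) dimension, based only on the abstract; the paper's exact definitions and scaling conventions may differ:

```python
import torch

def kfac_factors(a, g, flavour="expand"):
    """Kronecker factors for a linear layer with a weight-sharing dimension.

    a: [batch, shared, d_in]  layer inputs (e.g. per-token activations)
    g: [batch, shared, d_out] gradients of the loss w.r.t. layer outputs
    flavour: "expand" treats every shared position as an extra example;
             "reduce" first sums over the shared dimension per example.
    The Kronecker product A (x) B then approximates the layer's curvature.
    """
    if flavour == "expand":
        a = a.reshape(-1, a.shape[-1])   # [batch*shared, d_in]
        g = g.reshape(-1, g.shape[-1])   # [batch*shared, d_out]
    elif flavour == "reduce":
        a = a.sum(dim=1)                 # [batch, d_in]
        g = g.sum(dim=1)                 # [batch, d_out]
    A = a.T @ a / a.shape[0]             # input factor,    [d_in, d_in]
    B = g.T @ g / g.shape[0]             # gradient factor, [d_out, d_out]
    return A, B
```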

Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization

1 code implementation 17 Apr 2023 Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Vincent Fortuin

The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.

Bayesian Optimization · Decision Making · +2
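
For context, the linearized-Laplace approximation replaces the network with its first-order Taylor expansion around the MAP weights, so the Gaussian weight posterior gives closed-form predictive moments. A minimal sketch assuming a precomputed posterior covariance Sigma over the flattened parameters and PyTorch >= 2.0 (torch.func); in practice one would exploit last-layer or Kronecker structure rather than the dense covariance used here:

```python
import torch
from torch.func import functional_call, jacrev

def linearized_laplace_predictive(model, params_map, Sigma, x):
    """Predictive mean and covariance under the linearized Laplace (LLA):
    f(x; theta) ~ f(x; theta*) + J(x)(theta - theta*), theta ~ N(theta*, Sigma).

    params_map: dict of MAP parameters; Sigma: [P, P] posterior covariance
    over the flattened parameters (assumed already fitted elsewhere).
    """
    names = list(params_map)

    def f(flat_params):
        # Rebuild the parameter dict from a flat vector and run the model.
        out, offset = {}, 0
        for n in names:
            p = params_map[n]
            out[n] = flat_params[offset:offset + p.numel()].view_as(p)
            offset += p.numel()
        return functional_call(model, out, (x,))

    theta_star = torch.cat([params_map[n].reshape(-1) for n in names])
    mean = f(theta_star)                        # MAP prediction
    J = jacrev(f)(theta_star)                   # Jacobian w.r.t. parameters
    J = J.reshape(mean.numel(), -1)             # [out_dim, P]
    cov = J @ Sigma @ J.T                       # predictive covariance
    return mean, cov
```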

Approximate Bayesian Neural Operators: Uncertainty Quantification for Parametric PDEs

no code implementations 2 Aug 2022 Emilia Magnani, Nicholas Krämer, Runa Eschenhagen, Lorenzo Rosasco, Philipp Hennig

Neural operators are a type of deep architecture that learns to solve (i.e., learns the nonlinear solution operator of) partial differential equations (PDEs).

Gaussian Processes · Uncertainty Quantification
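
To make "learns the solution operator" concrete: training data are pairs of a discretised input function a(x) and the corresponding PDE solution u(x), and the model maps one grid function to the other. The toy below is a fixed-grid stand-in rather than a true discretisation-invariant neural operator, and it omits the paper's Bayesian uncertainty quantification; all names and sizes are illustrative:

```python
import torch
from torch import nn

# Each example pairs a discretised input function a(x) with the
# discretised PDE solution u(x) on the same grid of n_grid points.
n_grid = 64
model = nn.Sequential(nn.Linear(n_grid, 256), nn.GELU(), nn.Linear(256, n_grid))

a_batch = torch.randn(32, n_grid)   # discretised input functions
u_batch = torch.randn(32, n_grid)   # corresponding solutions (placeholder data)

# Supervised operator-learning objective: predict u from a.
loss = nn.functional.mse_loss(model(a_batch), u_batch)
loss.backward()
```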

Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks

1 code implementation 20 May 2022 Agustinus Kristiadi, Runa Eschenhagen, Philipp Hennig

We show that the resulting posterior approximation is competitive with even the gold-standard full-batch Hamiltonian Monte Carlo.

Laplace Redux -- Effortless Bayesian Deep Learning

3 code implementations NeurIPS 2021 Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, Philipp Hennig

Bayesian formulations of deep learning have been shown to have compelling theoretical properties and offer practical functional benefits, such as improved predictive uncertainty quantification and model selection.

Misconceptions · Model Selection · +1
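
The paper argues that a post-hoc Laplace approximation turns a trained network into an approximate Bayesian one at little cost. Below is a from-scratch sketch of the simplest variant, a diagonal last-layer Laplace approximation with a GGN-style curvature estimate; it is not the interface of the paper's accompanying software, and the function and argument names are made up for illustration:

```python
import torch

def last_layer_laplace(feature_extractor, last_layer, loader,
                       prior_precision=1.0, n_samples=20):
    """Post-hoc last-layer Laplace approximation (from-scratch sketch).

    Keeps the trained weights as the posterior mean, fits a diagonal
    Gaussian over the last-layer weights via a GGN-style curvature
    estimate, and predicts by averaging softmax outputs over samples.
    """
    w_star = last_layer.weight.detach().clone()      # MAP weights [C, D]
    H = torch.full_like(w_star, prior_precision)     # diagonal precision

    with torch.no_grad():
        for x, _ in loader:
            phi = feature_extractor(x)               # features [B, D]
            p = (phi @ w_star.T).softmax(-1)         # class probs [B, C]
            # Diagonal GGN of softmax cross-entropy for weight (c, d):
            # sum_n p_nc * (1 - p_nc) * phi_nd^2
            H += (p * (1 - p)).T @ phi.pow(2)

    std = H.rsqrt()                                  # posterior std-dev

    @torch.no_grad()
    def predict(x):
        phi = feature_extractor(x)
        samples = [
            (phi @ (w_star + std * torch.randn_like(w_star)).T).softmax(-1)
            for _ in range(n_samples)
        ]
        return torch.stack(samples).mean(0)          # Bayesian model average

    return predict
```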

Continual Deep Learning by Functional Regularisation of Memorable Past

1 code implementation NeurIPS 2020 Pingbo Pan, Siddharth Swaroop, Alexander Immer, Runa Eschenhagen, Richard E. Turner, Mohammad Emtiyaz Khan

Continually learning new skills is important for intelligent systems, yet standard deep learning methods suffer from catastrophic forgetting of the past.
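
The title's idea is to regularise the network's function values on a small set of stored "memorable" past inputs instead of regularising the weights directly. The sketch below shows such a functional drift penalty in its simplest form; the paper's actual method is a GP-based functional-prior formulation, and the names here are illustrative:

```python
import torch
from torch import nn

def functional_regularisation_loss(model, x_new, y_new, x_memory,
                                   old_outputs, lam=1.0):
    """Continual-learning loss with a functional drift penalty.

    Alongside the loss on the new task, penalise how far the current
    function has drifted from its previously recorded outputs on a small
    set of "memorable" past inputs (a simple stand-in for the paper's
    GP-based functional regulariser).
    """
    task_loss = nn.functional.cross_entropy(model(x_new), y_new)
    drift = nn.functional.mse_loss(model(x_memory), old_outputs)
    return task_loss + lam * drift
```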

Practical Deep Learning with Bayesian Principles

1 code implementation NeurIPS 2019 Kazuki Osawa, Siddharth Swaroop, Anirudh Jain, Runa Eschenhagen, Richard E. Turner, Rio Yokota, Mohammad Emtiyaz Khan

Importantly, the benefits of Bayesian principles are preserved: predictive probabilities are well-calibrated, uncertainties on out-of-distribution data are improved, and continual-learning performance is boosted.

Continual Learning · Data Augmentation · +1
