1 code implementation • 4 Nov 2022 • Dionysis Manousakas, Hippolyt Ritter, Theofanis Karaletsos
Recent advances in coreset methods have shown that a selection of representative datapoints can replace massive volumes of data for Bayesian inference, preserving the relevant statistical information and significantly accelerating downstream tasks.
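As a rough illustration of the idea (not the paper's actual algorithm; the helper names are hypothetical), a Bayesian coreset replaces the full-data log-likelihood with a small weighted sum:

```python
import torch

def coreset_log_joint(log_prior, log_lik, theta, x_core, w):
    # A coreset swaps the full-data term sum_n log p(x_n | theta) for a
    # weighted sum over a few representative points x_core with weights w:
    #   log p(theta) + sum_m w_m * log p(x_core_m | theta)
    return log_prior(theta) + (w * log_lik(theta, x_core)).sum()
```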
1 code implementation • 1 Oct 2021 • Hippolyt Ritter, Theofanis Karaletsos
We introduce TyXe, a Bayesian neural network library built on top of PyTorch and Pyro.
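For flavour, a minimal plain-Pyro sketch of the workflow such a library streamlines (this is generic Pyro, not TyXe's actual interface):

```python
import torch
import pyro
import pyro.distributions as dist

def model(x, y=None):
    # Bayesian linear model: weights get a prior, data are conditioned on.
    w = pyro.sample("w", dist.Normal(torch.zeros(x.shape[1]), 1.0).to_event(1))
    b = pyro.sample("b", dist.Normal(0.0, 1.0))
    with pyro.plate("data", x.shape[0]):
        pyro.sample("obs", dist.Normal(x @ w + b, 0.1), obs=y)

# Mean-field variational posterior fitted with stochastic variational inference.
guide = pyro.infer.autoguide.AutoNormal(model)
svi = pyro.infer.SVI(model, guide, pyro.optim.Adam({"lr": 1e-2}),
                     loss=pyro.infer.Trace_ELBO())
```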
no code implementations • NeurIPS 2021 • Hippolyt Ritter, Martin Kukla, Cheng Zhang, Yingzhen Li
Bayesian neural networks and deep ensembles represent two modern paradigms of uncertainty quantification in deep learning.
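A deep ensemble, for instance, reduces at prediction time to averaging the predictive distributions of independently trained networks (a minimal sketch, assuming a classification model):

```python
import torch

@torch.no_grad()
def ensemble_predict(models, x):
    # Deep ensemble: average the softmax outputs of independently trained
    # nets; disagreement between members signals predictive uncertainty.
    probs = torch.stack([m(x).softmax(dim=-1) for m in models])
    return probs.mean(dim=0)
```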
no code implementations • 28 Sep 2020 • Pauching Yap, Hippolyt Ritter, David Barber
This work introduces a Bayesian online meta-learning framework to tackle the problems of catastrophic forgetting and sequential few-shot learning.
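At its core, Bayesian online learning recycles the posterior after one task as the prior for the next; a minimal diagonal-Laplace sketch of that recursion (the function and its arguments are illustrative, not the paper's exact procedure):

```python
def online_laplace_update(theta_map, prior_precision, curvature):
    # After training on a task, the new prior mean is the MAP estimate and
    # the new prior precision accumulates a (diagonal) curvature estimate,
    # so earlier tasks keep constraining later ones.
    return theta_map, prior_precision + curvature
```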
1 code implementation • 30 Apr 2020 • Pauching Yap, Hippolyt Ritter, David Barber
We demonstrate that the popular gradient-based model-agnostic meta-learning algorithm (MAML) indeed suffers from catastrophic forgetting and introduce a Bayesian online meta-learning framework that tackles this problem.
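For reference, the MAML inner/outer-loop structure the framework builds on looks roughly like this in PyTorch (a sketch, with hypothetical argument names):

```python
import torch

def maml_step(model, loss_fn, support, query, inner_lr=0.01):
    # Inner loop: adapt a functional copy of the parameters on the support set.
    params = dict(model.named_parameters())
    x_s, y_s = support
    inner_loss = loss_fn(torch.func.functional_call(model, params, (x_s,)), y_s)
    grads = torch.autograd.grad(inner_loss, list(params.values()),
                                create_graph=True)
    adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}
    # Outer loop: the meta-loss is the adapted parameters' loss on the query set.
    x_q, y_q = query
    return loss_fn(torch.func.functional_call(model, adapted, (x_q,)), y_q)
```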
no code implementations • 12 Feb 2019 • Julius Kunze, Louis Kirsch, Hippolyt Ritter, David Barber
Variational inference with a factorized Gaussian posterior estimate is a widely used approach for learning parameters and hidden variables.
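A minimal sketch of such a posterior in PyTorch, sampled with the reparameterization trick:

```python
import torch

class MeanFieldGaussian(torch.nn.Module):
    # Factorized Gaussian q(w) = N(mu, diag(sigma^2)); softplus keeps sigma > 0.
    def __init__(self, dim):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(dim))
        self.rho = torch.nn.Parameter(torch.full((dim,), -3.0))

    def rsample(self):
        # Reparameterization: w = mu + sigma * eps lets gradients reach mu, rho.
        sigma = torch.nn.functional.softplus(self.rho)
        return self.mu + sigma * torch.randn_like(sigma)
```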
no code implementations • 27 Sep 2018 • Julius Kunze, Louis Kirsch, Hippolyt Ritter, David Barber
We propose Noisy Information Bottlenecks (NIB) to limit mutual information between learned parameters and the data through noise.
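The mechanism, stripped to a sketch (noise_scale is an illustrative hyperparameter, not a value from the paper):

```python
import torch

def noisy_parameters(params, noise_scale=0.1):
    # Injecting Gaussian noise before the parameters are used bounds the
    # mutual information they can carry about the training data.
    return [p + noise_scale * torch.randn_like(p) for p in params]
```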
no code implementations • NeurIPS 2018 • Hippolyt Ritter, Aleksandar Botev, David Barber
In order to make our method scalable, we leverage recent block-diagonal Kronecker-factored approximations to the curvature.
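A sketch of one such block for a linear layer, assuming minibatches a of layer inputs and g of backpropagated pre-activation derivatives:

```python
import torch

def kronecker_curvature_block(a, g):
    # The layer's curvature block is approximated as E[a a^T] (x) E[g g^T]
    # (up to the vectorization convention for the weight matrix).
    A = a.t() @ a / a.shape[0]  # (in, in) input second moment
    G = g.t() @ g / g.shape[0]  # (out, out) derivative second moment
    return torch.kron(A, G)     # Kronecker product: the full block
```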
1 code implementation • ICLR 2018 • Hippolyt Ritter, Aleksandar Botev, David Barber
PyTorch implementations of Bayes by Backprop, MC Dropout, SGLD, the local reparameterization trick, KF-Laplace, and more.
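As an example of what such implementations amount to, here is a generic MC Dropout prediction loop (not code from the repository; n_samples is illustrative):

```python
import torch

def mc_dropout_predict(model, x, n_samples=20):
    # MC Dropout: keep dropout active at test time and average stochastic
    # forward passes; their spread is an uncertainty estimate.
    model.train()  # leaves dropout layers on
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)
```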
no code implementations • ICML 2017 • Aleksandar Botev, Hippolyt Ritter, David Barber
We present an efficient block-diagonal approximation to the Gauss-Newton matrix for feedforward neural networks.
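In the notation common to this line of work, the diagonal block for the (vectorized) weights of layer ℓ factorizes approximately as a Kronecker product (a sketch of the standard factorization, not the paper's exact derivation):

```latex
% a_{\ell-1}: layer inputs; G_\ell: pre-activation Gauss-Newton matrix.
\bar{G}_\ell
  = \mathbb{E}\!\left[ a_{\ell-1} a_{\ell-1}^{\top} \otimes G_\ell \right]
  \approx \mathbb{E}\!\left[ a_{\ell-1} a_{\ell-1}^{\top} \right]
    \otimes \mathbb{E}\!\left[ G_\ell \right]
```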