no code implementations • 24 Oct 2023 • Matthew Ashman, Tommy Rochussen, Adrian Weller
The global inducing point variational approximation for BNNs uses a set of inducing inputs to construct a series of conditional distributions that accurately approximate the conditionals of the true posterior distribution.
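The underlying inducing-point construction is easiest to see in the GP setting: inducing inputs `z` with values `u` define a conditional distribution over function values at new inputs `x`, with mean `K_xz K_zz^{-1} u` and covariance `K_xx - K_xz K_zz^{-1} K_zx`. A minimal numpy sketch (illustrative only — the paper builds analogous conditionals for BNN posteriors, and the kernel and variable names here are assumptions):

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def conditional(x, z, u, jitter=1e-6):
    """Mean and covariance of p(f(x) | f(z) = u) under a GP prior.

    Standard inducing-point conditional: the inducing inputs z and
    values u parameterise a distribution over function values at x.
    """
    Kzz = rbf(z, z) + jitter * np.eye(len(z))
    Kxz = rbf(x, z)
    Kxx = rbf(x, x)
    A = Kxz @ np.linalg.inv(Kzz)   # K_xz K_zz^{-1}
    mean = A @ u                   # conditional mean
    cov = Kxx - A @ Kxz.T          # conditional covariance
    return mean, cov

z = np.array([-1.0, 0.0, 1.0])     # inducing inputs
u = np.array([0.5, 0.0, -0.5])     # inducing values
x = np.linspace(-1.0, 1.0, 5)      # test inputs
mean, cov = conditional(x, z, u)
```

At an inducing input the conditional mean matches the inducing value and the conditional variance collapses to (nearly) zero, which is what makes the inducing set an effective summary of the function.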
1 code implementation • 22 Mar 2023 • Matthew Ashman, Chao Ma, Agrin Hilmkil, Joel Jennings, Cheng Zhang
In this work, we further extend the existing body of work and develop a novel gradient-based approach to learning an ADMG with non-linear functional relations from observational data.
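Gradient-based structure learning of this kind typically relies on a differentiable acyclicity penalty such as the NOTEARS-style `h(W) = tr(exp(W ∘ W)) - d`, which is zero exactly when the weighted adjacency matrix `W` encodes a DAG. A hedged sketch of that generic penalty (the paper's method extends such approaches to ADMGs with non-linear functional relations; this only illustrates the core constraint):

```python
import numpy as np

def acyclicity(W, terms=20):
    """NOTEARS-style penalty h(W) = tr(exp(W * W)) - d.

    Zero iff the weighted graph W is acyclic; positive and
    differentiable otherwise, so it can serve as a soft constraint
    in gradient-based structure learning.
    """
    d = W.shape[0]
    A = W * W                  # elementwise square keeps the penalty >= 0
    # Truncated power series for the matrix exponential.
    term = np.eye(d)
    expm = np.eye(d)
    for k in range(1, terms):
        term = term @ A / k
        expm = expm + term
    return np.trace(expm) - d

dag = np.array([[0.0, 1.0], [0.0, 0.0]])   # edge 0 -> 1, acyclic
cyc = np.array([[0.0, 1.0], [1.0, 0.0]])   # 0 <-> 1, contains a cycle
```

Because the penalty is smooth in `W`, it can be added to a likelihood objective and minimised by any gradient-based optimiser.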
1 code implementation • 23 Sep 2022 • Mikko A. Heikkilä, Matthew Ashman, Siddharth Swaroop, Richard E. Turner, Antti Honkela
In this paper, we present differentially private partitioned variational inference, the first general framework for learning a variational approximation to a Bayesian posterior distribution in the federated learning setting while minimising the number of communication rounds and providing differential privacy guarantees for data subjects.
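The generic way frameworks of this kind obtain differential privacy guarantees for per-client contributions is the Gaussian mechanism: clip each client's update to a fixed norm and add calibrated Gaussian noise. A minimal sketch under that assumption (the function name and parameters are illustrative; the actual algorithm operates on variational natural-parameter updates):

```python
import numpy as np

def privatise_update(delta, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Clip a client's update and add Gaussian noise (Gaussian mechanism).

    Clipping bounds each client's sensitivity; the added noise scale
    noise_mult * clip_norm then yields a differential privacy guarantee.
    """
    rng = np.random.default_rng(rng)
    norm = np.linalg.norm(delta)
    clipped = delta * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_mult * clip_norm, size=delta.shape)
    return clipped + noise

update = np.array([3.0, 4.0])   # norm 5, will be clipped to norm 1
private = privatise_update(update, clip_norm=1.0, noise_mult=0.5, rng=0)
```

Minimising communication rounds matters here because each round consumes privacy budget, so fewer rounds mean less noise for the same guarantee.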
1 code implementation • 24 Feb 2022 • Matthew Ashman, Thang D. Bui, Cuong V. Nguyen, Stratis Markou, Adrian Weller, Siddharth Swaroop, Richard E. Turner
Variational inference (VI) has become the method of choice for fitting many modern probabilistic models.
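VI fits an approximating distribution by maximising the evidence lower bound (ELBO). A minimal worked example, assuming a Gaussian target N(3, 2²) and a Gaussian variational family, where the ELBO gradients are available in closed form and the optimum recovers the target exactly:

```python
import numpy as np

# Target density: N(m, s^2).  A Gaussian variational family is exact
# here, so gradient ascent on the ELBO should recover mu = m, sigma = s.
m, s = 3.0, 2.0

mu, sigma = 0.0, 1.0   # parameters of the approximation q = N(mu, sigma^2)
lr = 0.1
for _ in range(2000):
    # Closed-form ELBO gradients for a Gaussian target:
    # ELBO(mu, sigma) = -((mu - m)^2 + sigma^2) / (2 s^2) + log sigma + const
    grad_mu = -(mu - m) / s**2
    grad_sigma = -sigma / s**2 + 1.0 / sigma
    mu += lr * grad_mu
    sigma += lr * grad_sigma
```

For non-conjugate models the expectation in the ELBO has no closed form and is instead estimated by Monte Carlo, but the optimisation loop keeps this same shape.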
no code implementations • 10 May 2021 • Andrei Margeloiu, Matthew Ashman, Umang Bhatt, Yanzhi Chen, Mateja Jamnik, Adrian Weller
Concept bottleneck models map from raw inputs to concepts, and then from concepts to targets.
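The two-stage structure can be sketched in a few lines: one map from inputs to concepts, one from concepts to the target, with the bottleneck allowing test-time interventions on predicted concepts. A toy illustration with random weights standing in for trained predictors (all names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy concept bottleneck model: inputs -> concepts -> target.
W_xc = rng.normal(size=(4, 3))   # input (4-d) -> concept scores (3)
w_cy = rng.normal(size=3)        # concept scores -> target score

def predict(x, concepts=None):
    """Predict a target score; optionally intervene on the concepts.

    Passing `concepts` replaces the predicted bottleneck values,
    mimicking the test-time concept interventions CBMs allow.
    """
    c = x @ W_xc if concepts is None else concepts
    return c @ w_cy

x = rng.normal(size=4)
score = predict(x)
intervened = predict(x, concepts=np.array([1.0, 0.0, 0.0]))
```

Because the target depends on the input only through the concepts, correcting a mispredicted concept propagates directly to the final prediction.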
1 code implementation • 26 Oct 2020 • Metod Jazbec, Matthew Ashman, Vincent Fortuin, Michael Pearce, Stephan Mandt, Gunnar Rätsch
Conventional variational autoencoders fail to model correlations between data points because they use factorized priors.
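The contrast between the two priors is simple to show directly: a factorized prior has a diagonal covariance across data points, while a GP prior over the latents induces non-zero cross-point covariance, which is what lets the model share information between observations. An illustrative sketch (the kernel choice and timestamps are assumptions):

```python
import numpy as np

# A factorised prior treats latent codes at different data points as
# independent; a GP prior couples them through a kernel on auxiliary
# inputs (e.g. timestamps).
t = np.linspace(0.0, 1.0, 5)              # auxiliary inputs, e.g. time

factorised_cov = np.eye(len(t))           # N(0, I): diagonal covariance

d = t[:, None] - t[None, :]
gp_cov = np.exp(-0.5 * (d / 0.5) ** 2)    # RBF kernel: correlated prior
```

Nearby points under the GP prior get strongly correlated latents, so observations can inform each other's reconstructions.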
1 code implementation • 20 Oct 2020 • Matthew Ashman, Jonathan So, Will Tebbutt, Vincent Fortuin, Michael Pearce, Richard E. Turner
Large, multi-dimensional spatio-temporal datasets are omnipresent in modern science and engineering.