no code implementations • 7 Feb 2025 • Thang D. Bui, Matthew Ashman, Richard E. Turner
Extensive experiments on regression benchmarks, classification, and latent variable models demonstrate that the proposed approximation consistently matches or outperforms standard sparse variational GPs while maintaining the same computational cost.
no code implementations • 21 Dec 2024 • Anish Dhir, Matthew Ashman, James Requeima, Mark van der Wilk
To address these limitations, we propose a Bayesian meta-learning model that allows for sampling causal structures from the posterior and encodes these key properties.
no code implementations • 9 Oct 2024 • Matthew Ashman, Cristiana Diaconu, Eric Langezaal, Adrian Weller, Richard E. Turner
Recently, transformer-based approaches have shown great promise in a range of weather forecasting problems.
no code implementations • 19 Jun 2024 • Matthew Ashman, Cristiana Diaconu, Adrian Weller, Richard E. Turner
Standard NP architectures, such as the convolutional conditional NP (ConvCNP) or the family of transformer neural processes (TNPs), are not capable of in-context in-context learning, as they are only able to condition on a single dataset.
1 code implementation • 19 Jun 2024 • Matthew Ashman, Cristiana Diaconu, Adrian Weller, Wessel Bruinsma, Richard E. Turner
Our approach is agnostic to both the choice of symmetry group and model architecture, making it widely applicable.
1 code implementation • 18 Jun 2024 • Matthew Ashman, Cristiana Diaconu, Junhyuck Kim, Lakee Sivaraya, Stratis Markou, James Requeima, Wessel P. Bruinsma, Richard E. Turner
Notably, the posterior prediction maps for data that are stationary -- a common assumption in spatio-temporal modelling -- exhibit translation equivariance.
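For illustration (not the paper's implementation), here is a minimal numpy sketch of this property for a Gaussian process posterior mean with a stationary RBF kernel: shifting the context and target inputs by the same amount leaves the predictions unchanged. All names are illustrative.

    import numpy as np

    def rbf(a, b, lengthscale=1.0):
        # Stationary kernel: depends only on differences between inputs.
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)

    def predict(x_ctx, y_ctx, x_tgt, noise=0.1):
        # GP posterior predictive mean: the prediction map applied to a context set.
        K = rbf(x_ctx, x_ctx) + noise * np.eye(len(x_ctx))
        return rbf(x_tgt, x_ctx) @ np.linalg.solve(K, y_ctx)

    rng = np.random.default_rng(0)
    x_ctx, y_ctx = rng.normal(size=5), rng.normal(size=5)
    x_tgt, shift = np.linspace(-1.0, 1.0, 7), 3.7

    # Translation equivariance: translating every input leaves the prediction unchanged.
    assert np.allclose(predict(x_ctx, y_ctx, x_tgt),
                       predict(x_ctx + shift, y_ctx, x_tgt + shift))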
no code implementations • 12 Jun 2024 • Ossi Räisä, Stratis Markou, Matthew Ashman, Wessel P. Bruinsma, Marlon Tobaben, Antti Honkela, Richard E. Turner
One approach to mitigating this issue is pre-training models on simulated data before DP learning on the private data.
no code implementations • 24 Oct 2023 • Matthew Ashman, Tommy Rochussen, Adrian Weller
The global inducing point variational approximation for BNNs uses a set of inducing inputs to construct a series of conditional distributions that accurately approximate the conditionals of the true posterior distribution.
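As a rough illustration of the idea rather than the paper's construction, a minimal single-layer sketch, assuming Gaussian pseudo-observations of the layer outputs at the inducing inputs (all names, sizes, and the pseudo-likelihood form are assumptions): combining the prior over the layer's weights with these pseudo-observations yields a Gaussian conditional that serves as the approximate posterior for that layer.

    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_out, m = 3, 2, 10           # layer widths and number of inducing points

    U = rng.normal(size=(m, d_in))      # inducing inputs (outputs of the previous layer)
    V = rng.normal(size=(m, d_out))     # learnable pseudo-observations of this layer's outputs
    lam = 2.0                           # learnable precision of the pseudo-observations

    # Prior p(W) = N(0, I) combined with pseudo-likelihood N(V; U @ W, (1/lam) I)
    # gives a Gaussian conditional over the weights, column by column.
    S = np.linalg.inv(np.eye(d_in) + lam * U.T @ U)   # shared posterior covariance
    M = S @ (lam * U.T @ V)                           # posterior mean, one column per output unit

    # Sample weights from the conditional and push the inducing inputs to the next layer.
    W = M + np.linalg.cholesky(S) @ rng.normal(size=(d_in, d_out))
    U_next = np.maximum(U @ W, 0.0)     # e.g. a ReLU layer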
1 code implementation • 22 Mar 2023 • Matthew Ashman, Chao Ma, Agrin Hilmkil, Joel Jennings, Cheng Zhang
In this work, we further extend the existing body of work and develop a novel gradient-based approach to learning an ADMG with non-linear functional relations from observational data.
1 code implementation • 23 Sep 2022 • Mikko A. Heikkilä, Matthew Ashman, Siddharth Swaroop, Richard E. Turner, Antti Honkela
In this paper, we present differentially private partitioned variational inference, the first general framework for learning a variational approximation to a Bayesian posterior distribution in the federated learning setting while minimising the number of communication rounds and providing differential privacy guarantees for data subjects.
1 code implementation • 24 Feb 2022 • Matthew Ashman, Thang D. Bui, Cuong V. Nguyen, Stratis Markou, Adrian Weller, Siddharth Swaroop, Richard E. Turner
Variational inference (VI) has become the method of choice for fitting many modern probabilistic models.
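For readers unfamiliar with VI, it fits an approximate posterior by maximising the evidence lower bound (ELBO). A minimal sketch, assuming a toy Gaussian model and a Gaussian approximation with reparameterised Monte Carlo samples (all names and values are illustrative):

    import numpy as np

    def log_normal(x, mean, var):
        return -0.5 * (np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

    # Toy model: z ~ N(0, 1), x | z ~ N(z, 1), with a single observation.
    x_obs = 1.5
    mu, sigma = 0.7, 0.8                               # parameters of q(z) = N(mu, sigma^2)

    rng = np.random.default_rng(0)
    z = mu + sigma * rng.normal(size=10_000)           # reparameterised samples from q

    # ELBO = E_q[log p(x, z) - log q(z)], estimated by Monte Carlo.
    elbo = np.mean(log_normal(x_obs, z, 1.0)           # log p(x | z)
                   + log_normal(z, 0.0, 1.0)           # log p(z)
                   - log_normal(z, mu, sigma ** 2))    # log q(z)
    print(f"ELBO estimate: {elbo:.3f}")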
1 code implementation • 10 May 2021 • Andrei Margeloiu, Matthew Ashman, Umang Bhatt, Yanzhi Chen, Mateja Jamnik, Adrian Weller
Concept bottleneck models map from raw inputs to concepts, and then from concepts to targets.
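A minimal sketch of this two-stage structure, assuming a simple linear instantiation rather than the architectures studied in the paper (module names and sizes are illustrative):

    import torch
    import torch.nn as nn

    class ConceptBottleneck(nn.Module):
        def __init__(self, in_dim=32, n_concepts=8, n_targets=3):
            super().__init__()
            self.input_to_concepts = nn.Linear(in_dim, n_concepts)       # raw inputs -> concepts
            self.concepts_to_targets = nn.Linear(n_concepts, n_targets)  # concepts -> targets

        def forward(self, x):
            concepts = torch.sigmoid(self.input_to_concepts(x))   # predicted concept probabilities
            return self.concepts_to_targets(concepts), concepts   # target logits and the bottleneck

    model = ConceptBottleneck()
    logits, concepts = model(torch.randn(4, 32))
    # Training typically supervises both: a concept loss on `concepts` and a task loss on `logits`.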
1 code implementation • 26 Oct 2020 • Metod Jazbec, Matthew Ashman, Vincent Fortuin, Michael Pearce, Stephan Mandt, Gunnar Rätsch
Conventional variational autoencoders fail to model correlations between data points due to their use of factorized priors.
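To make the contrast concrete, a minimal numpy sketch (the kernel and the auxiliary index are illustrative): under a factorized prior the latents of different data points are independent, whereas a GP prior indexed by auxiliary information such as time couples them.

    import numpy as np

    t = np.linspace(0.0, 4.0, 5)        # auxiliary index (e.g. time) for 5 data points

    # Factorized prior: latent variables of different data points are independent.
    cov_factorized = np.eye(len(t))

    # GP prior: an RBF kernel over t induces correlations between the latents.
    cov_gp = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2)

    rng = np.random.default_rng(0)
    z_factorized = rng.multivariate_normal(np.zeros(len(t)), cov_factorized)
    z_gp = rng.multivariate_normal(np.zeros(len(t)), cov_gp)   # nearby points share similar latents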
1 code implementation • 20 Oct 2020 • Matthew Ashman, Jonathan So, Will Tebbutt, Vincent Fortuin, Michael Pearce, Richard E. Turner
Large, multi-dimensional spatio-temporal datasets are omnipresent in modern science and engineering.