Search Results for author: Matthew Ashman

Found 14 papers, 8 papers with code

Tighter sparse variational Gaussian processes

no code implementations • 7 Feb 2025 • Thang D. Bui, Matthew Ashman, Richard E. Turner

Extensive experiments on regression benchmarks, classification, and latent variable models demonstrate that the proposed approximation consistently matches or outperforms standard sparse variational GPs while maintaining the same computational cost.
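For reference, here is a minimal NumPy sketch of the standard sparse variational GP predictive that such approximations share; the paper's tighter bound itself is not reproduced, and the kernel and variational parameters below are purely illustrative.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between the rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def svgp_predict(X, Z, m, S, jitter=1e-6):
    """Predictive moments of a standard SVGP with inducing inputs Z (M, D)
    and variational distribution q(u) = N(m, S); cost is O(M^3 + N M^2)."""
    Kzz = rbf(Z, Z) + jitter * np.eye(len(Z))
    A = np.linalg.solve(Kzz, rbf(Z, X)).T        # K_xz K_zz^{-1}
    mean = A @ m
    cov = rbf(X, X) - A @ (Kzz - S) @ A.T
    return mean, cov
```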

Computational Efficiency • Gaussian Processes • +1

A Meta-Learning Approach to Bayesian Causal Discovery

no code implementations • 21 Dec 2024 • Anish Dhir, Matthew Ashman, James Requeima, Mark van der Wilk

To address these limitations, we propose a Bayesian meta-learning model that allows for sampling causal structures from the posterior and encodes these key properties.
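As a loose illustration only, here is a hypothetical amortised model that maps a dataset to a distribution over adjacency matrices and samples candidate structures from it; the class name, mean pooling, and Bernoulli edge parameterisation are assumptions, and acyclicity handling is omitted.

```python
import torch
import torch.nn as nn

class AmortisedCausalPosterior(nn.Module):
    """Hypothetical sketch: encode a dataset, then parameterise a
    distribution over directed graphs from which structures are sampled.
    Constraints such as acyclicity are not enforced here."""
    def __init__(self, d_vars, d_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_vars, d_hidden), nn.ReLU())
        self.edge_logits = nn.Linear(d_hidden, d_vars * d_vars)
        self.d = d_vars

    def forward(self, data):              # data: (n_samples, d_vars)
        h = self.encoder(data).mean(0)    # permutation-invariant pooling
        logits = self.edge_logits(h).view(self.d, self.d)
        return torch.distributions.Bernoulli(logits=logits)

model = AmortisedCausalPosterior(d_vars=5)
graph_posterior = model(torch.randn(100, 5))
adjacency = graph_posterior.sample()      # one sampled causal structure
```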

Causal Discovery • Meta-Learning

In-Context In-Context Learning with Transformer Neural Processes

no code implementations • 19 Jun 2024 • Matthew Ashman, Cristiana Diaconu, Adrian Weller, Richard E. Turner

Standard NP architectures, such as the convolutional conditional NP (ConvCNP) or the family of transformer neural processes (TNPs), are not capable of in-context in-context learning, as they are only able to condition on a single dataset.
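A toy sketch of the data-layout difference, assuming a hypothetical scheme that tags each point with a dataset index before joint attention; this is not the paper's architecture, only an illustration of conditioning on several datasets at once.

```python
import torch

# Standard NP conditioning: a single context set per task.
context_x = torch.randn(1, 32, 1)   # (batch, n_context, x_dim)
context_y = torch.randn(1, 32, 1)

# In-context in-context learning additionally conditions on several
# related datasets; one hypothetical layout tags each point with the
# index of the dataset it came from before joint attention.
datasets = [torch.randn(1, 32, 2) for _ in range(4)]     # (x, y) pairs
tokens = torch.cat(
    [torch.cat([d, torch.full((1, 32, 1), float(i))], dim=-1)
     for i, d in enumerate(datasets)], dim=1)            # (1, 128, 3)
attn = torch.nn.MultiheadAttention(embed_dim=3, num_heads=1, batch_first=True)
out, _ = attn(tokens, tokens, tokens)                    # joint conditioning
```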

In-Context Learning • Meta-Learning

Approximately Equivariant Neural Processes

1 code implementation • 19 Jun 2024 • Matthew Ashman, Cristiana Diaconu, Adrian Weller, Wessel Bruinsma, Richard E. Turner

Our approach is agnostic to both the choice of symmetry group and model architecture, making it widely applicable.

Meta-Learning

Translation Equivariant Transformer Neural Processes

1 code implementation • 18 Jun 2024 • Matthew Ashman, Cristiana Diaconu, Junhyuck Kim, Lakee Sivaraya, Stratis Markou, James Requeima, Wessel P. Bruinsma, Richard E. Turner

Notably, the posterior prediction maps for data that are stationary, a common assumption in spatio-temporal modelling, exhibit translation equivariance.
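A toy demonstration of the property: if attention scores depend only on pairwise differences of inputs, the prediction map is unchanged when context and targets are shifted together. This is an illustration of translation equivariance, not the TE-TNP architecture itself.

```python
import torch

def relative_attention_scores(x_context, x_target):
    """Toy translation-equivariant attention: scores depend only on the
    pairwise differences x_target - x_context, so shifting every input
    by the same delta leaves the prediction map unchanged."""
    diffs = x_target[:, None, :] - x_context[None, :, :]
    return torch.softmax(-(diffs ** 2).sum(-1), dim=-1)

xc, xt = torch.randn(8, 1), torch.randn(4, 1)
shift = 3.7
assert torch.allclose(
    relative_attention_scores(xc, xt),
    relative_attention_scores(xc + shift, xt + shift))
```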

Translation

Amortised Inference in Neural Networks for Small-Scale Probabilistic Meta-Learning

no code implementations • 24 Oct 2023 • Matthew Ashman, Tommy Rochussen, Adrian Weller

The global inducing point variational approximation for BNNs is based on using a set of inducing inputs to construct a series of conditional distributions that accurately approximate the conditionals of the true posterior distribution.
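A stripped-down, mean-only sketch of that idea, assuming Bayesian linear regression from propagated inducing activations to learnable pseudo-targets; the real scheme samples weights from the full conditional rather than using its mean.

```python
import torch
import torch.nn as nn

class GlobalInducingLayer(nn.Module):
    """Sketch: the distribution over this layer's weights is conditioned
    on the inducing inputs propagated through all previous layers.
    Only the conditional mean of q(W | u) is used here."""
    def __init__(self, d_in, d_out, n_inducing):
        super().__init__()
        self.pseudo_targets = nn.Parameter(torch.randn(n_inducing, d_out))
        self.log_noise = nn.Parameter(torch.zeros(()))

    def forward(self, h, u):
        # h: activations for data; u: activations for inducing inputs.
        noise = self.log_noise.exp()
        prec = u.T @ u / noise + torch.eye(u.shape[1])
        W = torch.linalg.solve(prec, u.T @ self.pseudo_targets / noise)
        # Propagate both streams so the next layer can condition on u.
        return h @ W, u @ W
```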

Bayesian Inference • Meta-Learning

Causal Reasoning in the Presence of Latent Confounders via Neural ADMG Learning

1 code implementation • 22 Mar 2023 • Matthew Ashman, Chao Ma, Agrin Hilmkil, Joel Jennings, Cheng Zhang

In this work, we further extend the existing body of work and develop a novel gradient-based approach to learning an ADMG with non-linear functional relations from observational data.
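For intuition, one standard ingredient of gradient-based structure learning is the NOTEARS acyclicity penalty of Zheng et al. (2018), shown below; ADMG learning extends constraints of this kind to graphs with bidirected edges, and the exact constraint used in this paper may differ.

```python
import torch

def acyclicity_penalty(A):
    """NOTEARS-style differentiable acyclicity constraint:
    h(A) = tr(exp(A * A)) - d is zero iff the weighted adjacency
    matrix A encodes a DAG, so it can be driven to zero by gradient
    descent alongside a data-fit term for the functional relations."""
    d = A.shape[0]
    return torch.trace(torch.matrix_exp(A * A)) - d
```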

Differentially private partitioned variational inference

1 code implementation • 23 Sep 2022 • Mikko A. Heikkilä, Matthew Ashman, Siddharth Swaroop, Richard E. Turner, Antti Honkela

In this paper, we present differentially private partitioned variational inference, the first general framework for learning a variational approximation to a Bayesian posterior distribution in the federated learning setting while minimising the number of communication rounds and providing differential privacy guarantees for data subjects.
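The core privacy mechanism, sketched in isolation: clip a client's update and add calibrated noise before it is shared, as in the Gaussian mechanism underlying DP-SGD. How this is wired into partitioned variational inference is the paper's contribution and is not shown here.

```python
import torch

def dp_sanitise(update, clip_norm=1.0, noise_mult=1.0):
    """Clip an update to bounded norm, then add Gaussian noise scaled
    to that bound, so the shared message satisfies differential privacy
    for an appropriate noise multiplier."""
    scale = torch.clamp(clip_norm / (update.norm() + 1e-12), max=1.0)
    return update * scale + noise_mult * clip_norm * torch.randn_like(update)
```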

Federated Learning • Privacy Preserving • +1

Do Concept Bottleneck Models Learn as Intended?

1 code implementation • 10 May 2021 • Andrei Margeloiu, Matthew Ashman, Umang Bhatt, Yanzhi Chen, Mateja Jamnik, Adrian Weller

Concept bottleneck models map from raw inputs to concepts, and then from concepts to targets.
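A minimal sketch of that two-stage structure; the layer sizes are arbitrary, and the key point is that the target predictor sees the input only through the predicted concepts.

```python
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    """Minimal sketch: x -> concepts -> y, with the concept layer as
    the only path from the raw input to the target predictor."""
    def __init__(self, d_in, n_concepts, n_classes):
        super().__init__()
        self.concept_net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                         nn.Linear(64, n_concepts), nn.Sigmoid())
        self.target_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        c = self.concept_net(x)       # predicted concepts (supervised separately)
        return self.target_net(c), c  # target depends on x only through c
```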

Scalable Gaussian Process Variational Autoencoders

1 code implementation • 26 Oct 2020 • Metod Jazbec, Matthew Ashman, Vincent Fortuin, Michael Pearce, Stephan Mandt, Gunnar Rätsch

Conventional variational autoencoders fail to model correlations between data points due to their use of factorized priors.
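A sketch of the underlying fix, assuming latents indexed by an auxiliary input such as time: score them under a GP prior, whose kernel introduces correlations between data points that a factorized N(0, I) prior cannot express. The kernel choice and independent-per-dimension treatment below are illustrative.

```python
import torch

def gp_prior_logpdf(z, t, lengthscale=1.0, jitter=1e-4):
    """Score latents z (n, d) under a GP prior over the index t (n, 1),
    instead of the factorised N(0, I) prior of a standard VAE.
    Correlations between data points enter through the kernel matrix."""
    d2 = (t - t.T) ** 2
    K = torch.exp(-0.5 * d2 / lengthscale ** 2) + jitter * torch.eye(len(t))
    mvn = torch.distributions.MultivariateNormal(torch.zeros(len(t)), K)
    return mvn.log_prob(z.T).sum()   # independent GPs across latent dims
```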
