Search Results for author: James Urquhart Allingham

Found 8 papers, 4 papers with code

A Generative Model of Symmetry Transformations

no code implementations • 4 Mar 2024 • James Urquhart Allingham, Bruno Kacper Mlodozeniec, Shreyas Padhy, Javier Antorán, David Krueger, Richard E. Turner, Eric Nalisnick, José Miguel Hernández-Lobato

Correctly capturing the symmetry transformations of data can lead to efficient models with strong generalization capabilities, though methods incorporating symmetries often require prior knowledge of those symmetries.

A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models

no code implementations • 13 Feb 2023 • James Urquhart Allingham, Jie Ren, Michael W Dusenberry, Xiuye Gu, Yin Cui, Dustin Tran, Jeremiah Zhe Liu, Balaji Lakshminarayanan

In particular, we ask: "Given a large pool of prompts, can we automatically score the prompts and ensemble those that are most suitable for a particular downstream dataset, without needing access to labeled validation data?"

Prompt Engineering · Zero-Shot Learning
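The snippet above frames prompt selection as a scoring problem over an unlabeled pool. Below is a minimal numpy sketch of one plausible scheme, not necessarily the paper's method: score each prompt template by the average confidence of its zero-shot predictions on unlabeled images, then average the class embeddings of the top-scoring templates. All shapes, the temperature, and the confidence proxy are illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def score_prompts(img_emb, txt_emb):
    """Score each prompt by the mean max-probability of its zero-shot
    predictions on unlabeled images (a hypothetical suitability proxy).
    img_emb: (n_images, d) L2-normalised image embeddings.
    txt_emb: (n_prompts, n_classes, d) L2-normalised text embeddings.
    """
    scores = []
    for t in txt_emb:                           # one prompt template at a time
        probs = softmax(100.0 * img_emb @ t.T)  # CLIP-style temperature
        scores.append(probs.max(axis=1).mean())
    return np.array(scores)

def ensemble_prompts(txt_emb, scores, k=5):
    """Average the class embeddings of the top-k scoring prompts."""
    top = np.argsort(scores)[-k:]
    w = txt_emb[top].mean(axis=0)               # (n_classes, d)
    return w / np.linalg.norm(w, axis=-1, keepdims=True)
```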

Adapting the Linearised Laplace Model Evidence for Modern Deep Learning

no code implementations • 17 Jun 2022 • Javier Antorán, David Janz, James Urquhart Allingham, Erik Daxberger, Riccardo Barbano, Eric Nalisnick, José Miguel Hernández-Lobato

The linearised Laplace method for estimating model uncertainty has received renewed attention in the Bayesian deep learning community.

Model Selection
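For context, the quantity being adapted is the Laplace approximation to the log marginal likelihood (model evidence), built from the log joint and the curvature at the MAP estimate. A minimal dense-Hessian sketch follows; real networks need structured curvature approximations (diagonal, Kronecker-factored, etc.), and all names here are illustrative.

```python
import numpy as np

def laplace_log_evidence(log_joint_at_map, hessian):
    """Laplace approximation to the log evidence:
        log p(D) ≈ log p(D, θ_MAP) + (d/2) log(2π) − (1/2) log|H|,
    where H is the Hessian of the negative log-joint at the MAP.
    """
    d = hessian.shape[0]
    sign, logdet = np.linalg.slogdet(hessian)
    assert sign > 0, "Hessian must be positive definite at the MAP"
    return log_joint_at_map + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet
```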

Linearised Laplace Inference in Networks with Normalisation Layers and the Neural g-Prior

no code implementations • AABI Symposium 2022 • Javier Antorán, James Urquhart Allingham, David Janz, Erik Daxberger, Eric Nalisnick, José Miguel Hernández-Lobato

We show that for neural networks (NNs) with normalisation layers, i.e. batch norm, layer norm, or group norm, the Laplace model evidence does not approximate the volume of a posterior mode and is thus unsuitable for model selection.

Image Classification · Model Selection · +1
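One way to see the failure, consistent with the claim above: normalisation layers make the network invariant to rescaling of the preceding weights, so the posterior mode has flat directions and its Hessian-based "volume" degenerates. A small PyTorch demonstration of that invariance (layer sizes arbitrary):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(32, 10)

lin = nn.Linear(10, 5, bias=False)
bn = nn.BatchNorm1d(5, affine=False)   # normalises with batch statistics

out1 = bn(lin(x))
with torch.no_grad():
    lin.weight.mul_(10.0)              # rescale the pre-norm weights
out2 = bn(lin(x))

# The function the network computes is unchanged along this direction.
print(torch.allclose(out1, out2, atol=1e-4))  # True (up to the BN epsilon)
```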

Sparse MoEs meet Efficient Ensembles

1 code implementation • 7 Oct 2021 • James Urquhart Allingham, Florian Wenzel, Zelda E Mariet, Basil Mustafa, Joan Puigcerver, Neil Houlsby, Ghassen Jerfel, Vincent Fortuin, Balaji Lakshminarayanan, Jasper Snoek, Dustin Tran, Carlos Riquelme Ruiz, Rodolphe Jenatton

Machine learning models based on the aggregated outputs of submodels, either at the activation or prediction levels, often exhibit strong performance compared to individual models.

Few-Shot Learning
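A minimal sketch of prediction-level aggregation as described above: average the class probabilities of the submodels, deep-ensemble style. Activation-level aggregation, as in sparse MoEs, would instead mix intermediate features via a router. The interface is an assumption, not the paper's API.

```python
import numpy as np

def ensemble_predict(submodels, x):
    """Prediction-level aggregation: average submodel class probabilities.
    `submodels` is a list of callables, each returning (n, n_classes)
    probabilities for a batch `x`.
    """
    probs = np.stack([m(x) for m in submodels])  # (n_models, n, n_classes)
    return probs.mean(axis=0)

# e.g. probs = ensemble_predict([model_a, model_b, model_c], x_batch)
```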

Bayesian Deep Learning via Subnetwork Inference

1 code implementation • 28 Oct 2020 • Erik Daxberger, Eric Nalisnick, James Urquhart Allingham, Javier Antorán, José Miguel Hernández-Lobato

In particular, we implement subnetwork linearized Laplace as a simple, scalable Bayesian deep learning method: We first obtain a MAP estimate of all weights and then infer a full-covariance Gaussian posterior over a subnetwork using the linearized Laplace approximation.

Bayesian Inference
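The snippet above states the recipe directly. A toy numpy sketch for the regression case, assuming the Jacobian of the model output w.r.t. the chosen subnetwork weights at the MAP is available; how the subnetwork is selected is omitted here, and all names are illustrative.

```python
import numpy as np

def subnetwork_laplace_cov(J, prior_prec=1.0, noise_prec=1.0):
    """Full-covariance Gaussian posterior over a subnetwork via the
    linearized Laplace (generalized Gauss-Newton) approximation.
    J: (n_data, n_sub) Jacobian of the output w.r.t. subnetwork weights.
    """
    precision = noise_prec * J.T @ J + prior_prec * np.eye(J.shape[1])
    return np.linalg.inv(precision)

def predictive_variance(j_star, cov):
    """Variance of the linearized predictive f(x*) ≈ f_MAP(x*) + j*ᵀ δw,
    where j_star is the Jacobian at the test point x*."""
    return j_star @ cov @ j_star
```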

Depth Uncertainty in Neural Networks

1 code implementation • NeurIPS 2020 • Javier Antorán, James Urquhart Allingham, José Miguel Hernández-Lobato

Existing methods for estimating uncertainty in deep learning tend to require multiple forward passes, making them unsuitable for applications where computational resources are limited.

Image Classification · Regression
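One way to get uncertainty from a single forward pass, in line with the paper's title, is to treat network depth as a random variable: read off a prediction at every depth of one pass through the layers, then marginalise under a learnable belief over depths. A hedged PyTorch sketch; the architecture and shared output head are illustrative choices, not necessarily the authors' exact model.

```python
import torch
import torch.nn as nn

class DepthMarginalisedNet(nn.Module):
    def __init__(self, dim=64, n_layers=5, n_classes=10):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_layers)
        )
        self.head = nn.Linear(dim, n_classes)       # shared across depths
        self.depth_logits = nn.Parameter(torch.zeros(n_layers))

    def forward(self, x):
        per_depth = []
        for layer in self.layers:                   # one pass, all depths
            x = layer(x)
            per_depth.append(self.head(x).softmax(-1))
        q = self.depth_logits.softmax(0)            # belief over depths
        stacked = torch.stack(per_depth)            # (n_layers, n, n_classes)
        return (q[:, None, None] * stacked).sum(0)  # marginal predictive
```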

Variational Depth Search in ResNets

1 code implementation • 6 Feb 2020 • Javier Antorán, James Urquhart Allingham, José Miguel Hernández-Lobato

One-shot neural architecture search allows joint learning of weights and network architecture, reducing computational cost.

Neural Architecture Search
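A sketch of the kind of objective such a one-shot search could optimise: a categorical variational posterior over depth trained jointly with the weights, with the expectation computed exactly by enumerating depths; the most probable depth is then the selected architecture. Function names and the uniform prior are assumptions, not the paper's exact formulation.

```python
import math
import torch

def depth_elbo(log_liks_per_depth, depth_logits, log_prior=None):
    """ELBO with a categorical q(depth), enumerated exactly.
    log_liks_per_depth: (n_depths,) data log-likelihood obtained from
    the per-depth predictions of a single forward pass.
    """
    log_q = depth_logits.log_softmax(0)
    q = log_q.exp()
    if log_prior is None:                     # uniform prior over depths
        log_prior = torch.full_like(log_q, -math.log(len(log_q)))
    kl = (q * (log_q - log_prior)).sum()      # KL(q(d) || p(d))
    return (q * log_liks_per_depth).sum() - kl

# After optimisation: selected_depth = depth_logits.argmax()
```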
