Search Results for author: David Barber

Found 43 papers, 10 papers with code

Spread Divergence

no code implementations • ICML 2020 • Mingtian Zhang, Peter Hayes, Thomas Bird, Raza Habib, David Barber

For distributions $p$ and $q$ with different supports, the divergence $D(p\|q)$ may not exist.
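
In brief, the remedy proposed in the paper: convolve both distributions with a shared noise kernel $K(y|x)$, giving $\tilde{p}(y) = \int K(y|x)\,p(x)\,dx$ and $\tilde{q}(y) = \int K(y|x)\,q(x)\,dx$, and work with the spread divergence $\tilde{D}(p\|q) \equiv D(\tilde{p}\|\tilde{q})$. The smoothed distributions share support, so the divergence is well defined, and for suitable kernels $\tilde{D}(p\|q) = 0$ if and only if $p = q$.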

Generalization Gap in Amortized Inference

no code implementations • 23 May 2022 • Mingtian Zhang, Peter Hayes, David Barber

The ability of likelihood-based probabilistic models to generalize to unseen data is central to many machine learning applications such as lossless compression.
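The compression connection, for context: coding data $x \sim p$ with a model $q$ costs $\mathbb{E}_{x \sim p}[-\log_2 q(x)] = H(p) + \mathrm{KL}(p\|q)$ bits on average, so any generalization gap in the model's log-likelihood on unseen data translates directly into extra bits per symbol.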

Survival Analysis for Idiopathic Pulmonary Fibrosis using CT Images and Incomplete Clinical Data

1 code implementation • 21 Mar 2022 • Ahmed H. Shahin, Joseph Jacob, Daniel C. Alexander, David Barber

To this end, we propose a probabilistic model that captures the dependencies between the observed clinical variables and imputes missing ones.

Imputation • Survival Analysis

Parallel Neural Local Lossless Compression

2 code implementations • 13 Jan 2022 • Mingtian Zhang, James Townsend, Ning Kang, David Barber

The recently proposed Neural Local Lossless Compression (NeLLoC), which is based on a local autoregressive model, has achieved state-of-the-art (SOTA) out-of-distribution (OOD) generalization performance in the image compression task.

Image Compression
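
Roughly, a local autoregressive model factorizes the image as $p(x) = \prod_i p(x_i \mid x_{\mathcal{N}(i)})$, where $\mathcal{N}(i)$ is a small causal neighbourhood of pixel $i$ rather than the full raster-scan history. Because each conditional depends only on nearby pixels, sufficiently separated pixels can be encoded and decoded simultaneously, which is the parallelism this paper exploits.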

Adaptive Optimization with Examplewise Gradients

1 code implementation • 30 Nov 2021 • Julius Kunze, James Townsend, David Barber

We propose a new, more general approach to the design of stochastic gradient-based optimization methods for machine learning.
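
As a rough illustration (not the paper's algorithm): "examplewise gradients" are the per-example terms whose mean is the ordinary batch gradient; keeping them exposes statistics, such as the per-parameter variance across the batch, that averaging discards. A minimal NumPy sketch for linear least squares:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(8, 3))    # batch of 8 examples, 3 features
    y = rng.normal(size=8)
    w = np.zeros(3)

    # Per-example gradient of (x_i @ w - y_i)**2 with respect to w.
    residuals = X @ w - y                            # shape (8,)
    per_example_grads = 2 * residuals[:, None] * X   # shape (8, 3)

    batch_grad = per_example_grads.mean(axis=0)      # the ordinary SGD gradient
    grad_var = per_example_grads.var(axis=0)         # examplewise spread per parameter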

Sample Efficient Model Evaluation

no code implementations • 24 Sep 2021 • Emine Yilmaz, Peter Hayes, Raza Habib, Jordan Burgess, David Barber

Labelling data is a major practical bottleneck in training and testing classifiers.

Locally-Contextual Nonlinear CRFs for Sequence Labeling

no code implementations • 30 Mar 2021 • Harshil Shah, Tim Xiao, David Barber

Linear chain conditional random fields (CRFs) combined with contextual word embeddings have achieved state-of-the-art performance on sequence labeling tasks.

Chunking • Named Entity Recognition +1
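
For background, a linear chain CRF in one common parameterisation models $p(y \mid x) = \frac{1}{Z(x)} \exp\big(\sum_{t=1}^{T} \phi(y_{t-1}, y_t, x, t)\big)$, where the potentials $\phi$ couple adjacent labels; the paper's variant makes the dependence of the potentials on $x$ nonlinear and locally contextual.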

Solipsistic Reinforcement Learning

no code implementations • ICLR Workshop SSL-RL 2021 • Mingtian Zhang, Peter Noel Hayes, Tim Z. Xiao, Andi Zhang, David Barber

We introduce a new model-based reinforcement learning framework that aims to tackle environments with high dimensional state spaces.

Model-based Reinforcement Learning • reinforcement-learning

Learning disentangled representations with the Wasserstein Autoencoder

no code implementations • 1 Jan 2021 • Benoit Gaujac, Ilya Feige, David Barber

We further study the trade-off between disentanglement and reconstruction on more difficult data sets with unknown generative factors, where we expect improved reconstructions due to the flexibility of the WAE paradigm.

Disentanglement

Efficiently labelling sequences using semi-supervised active learning

no code implementations • 1 Jan 2021 • Harshil Shah, David Barber

However, active learning methods usually use supervised training and ignore the data points which have not yet been labelled.

Active Learning

Reducing the Computational Cost of Deep Generative Models with Binary Neural Networks

no code implementations • ICLR 2021 • Thomas Bird, Friso H. Kingma, David Barber

In this work we show, for the first time, that we can successfully train generative models which utilize binary neural networks.
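
A minimal sketch of one standard recipe for training with binary weights (the straight-through estimator; the paper's exact scheme may differ): binarize latent real-valued weights on the forward pass, and let gradients pass through the sign function as if it were the identity.

    import numpy as np

    rng = np.random.default_rng(0)
    w = 0.1 * rng.normal(size=(4, 3))   # latent real-valued weights
    X = rng.normal(size=(8, 4))
    y = rng.normal(size=(8, 3))         # regression targets, for illustration

    for step in range(200):
        wb = np.sign(w)                          # binary weights used in the forward pass
        pred = X @ wb
        grad_pred = 2 * (pred - y) / pred.size   # gradient of mean squared error
        grad_wb = X.T @ grad_pred
        w -= 0.1 * grad_wb                       # straight-through: treat d(wb)/dw as 1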

Representation Learning for High-Dimensional Data Collection under Local Differential Privacy

no code implementations • 23 Oct 2020 • Alex Mansbridge, Gregory Barbour, Davide Piras, Michael Murray, Christopher Frye, Ilya Feige, David Barber

In this work, our contributions are two-fold: first, by adapting state-of-the-art techniques from representation learning, we introduce a novel approach to learning LDP mechanisms.

Denoising • Representation Learning

Learning Deep-Latent Hierarchies by Stacking Wasserstein Autoencoders

no code implementations • 7 Oct 2020 • Benoit Gaujac, Ilya Feige, David Barber

Probabilistic models with hierarchical-latent-variable structures provide state-of-the-art results amongst non-autoregressive, unsupervised density-based models.

Learning disentangled representations with the Wasserstein Autoencoder

no code implementations • 7 Oct 2020 • Benoit Gaujac, Ilya Feige, David Barber

We further study the trade-off between disentanglement and reconstruction on more difficult data sets with unknown generative factors, where the flexibility of the WAE paradigm in the reconstruction term improves reconstructions.

Disentanglement

Bayesian Online Meta-Learning

no code implementations • 28 Sep 2020 • Pauching Yap, Hippolyt Ritter, David Barber

This work introduces a Bayesian online meta-learning framework to tackle the catastrophic forgetting and the sequential few-shot tasks problems.

Classification • Meta-Learning +2

Addressing Catastrophic Forgetting in Few-Shot Problems

1 code implementation • 30 Apr 2020 • Pauching Yap, Hippolyt Ritter, David Barber

We demonstrate that the popular gradient-based model-agnostic meta-learning algorithm (MAML) indeed suffers from catastrophic forgetting and introduce a Bayesian online meta-learning framework that tackles this problem.

Classification • General Classification +3

Private Machine Learning via Randomised Response

no code implementations • 14 Jan 2020 • David Barber

We introduce a general learning framework for private machine learning based on randomised response.
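
For context, the classical randomised-response primitive (Warner, 1965) that such a framework builds on: each respondent reports their true bit with probability $p > 1/2$ and the flipped bit otherwise, and the aggregate is debiased afterwards. A minimal sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    true_bits = rng.integers(0, 2, size=10_000)   # sensitive binary attribute
    p = 0.75                                      # probability of answering truthfully

    keep = rng.random(true_bits.shape) < p
    reported = np.where(keep, true_bits, 1 - true_bits)

    # E[mean(reported)] = p*mu + (1-p)*(1-mu), so invert for an unbiased estimate:
    mu_hat = (reported.mean() - (1 - p)) / (2 * p - 1)
    print(mu_hat, true_bits.mean())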

HiLLoC: Lossless Image Compression with Hierarchical Latent Variable Models

1 code implementation • ICLR 2020 • James Townsend, Thomas Bird, Julius Kunze, David Barber

We make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well, not just to 64x64 but also to far larger photographs, with no changes to the model.

Image Compression

Spread Divergence

no code implementations • 25 Sep 2019 • Mingtian Zhang, David Barber, Thomas Bird, Peter Hayes, Raza Habib

For distributions $p$ and $q$ with different supports, the divergence $D(p\|q)$ may not exist.

Variational f-divergence Minimization

no code implementations • 27 Jul 2019 • Mingtian Zhang, Thomas Bird, Raza Habib, Tianlin Xu, David Barber

Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific f-divergence between the model and data distribution.

Image Generation
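
The specific f-divergence in question is the forward KL: since $\mathrm{KL}(p_{\mathrm{data}} \,\|\, p_\theta) = -\mathbb{E}_{x \sim p_{\mathrm{data}}}[\log p_\theta(x)] - H(p_{\mathrm{data}})$ and the data entropy does not depend on $\theta$, maximising likelihood is exactly minimising $\mathrm{KL}(p_{\mathrm{data}} \,\|\, p_\theta)$.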

Auxiliary Variational MCMC

1 code implementation • ICLR 2019 • Raza Habib, David Barber

We introduce Auxiliary Variational MCMC, a novel framework for learning MCMC kernels that combines recent advances in variational inference with insights drawn from traditional auxiliary variable MCMC methods such as Hamiltonian Monte Carlo.

Variational Inference

Gaussian Mean Field Regularizes by Limiting Learned Information

no code implementations • 12 Feb 2019 • Julius Kunze, Louis Kirsch, Hippolyt Ritter, David Barber

Variational inference with a factorized Gaussian posterior estimate is a widely used approach for learning parameters and hidden variables.

Variational Inference
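
Concretely, the factorized Gaussian (mean-field) posterior estimate referred to here has the form $q(\theta) = \prod_i \mathcal{N}(\theta_i;\, \mu_i, \sigma_i^2)$, with the variational parameters $\mu_i, \sigma_i$ fit by maximising the usual evidence lower bound.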

Spread Divergences

no code implementations • 21 Nov 2018 • Mingtian Zhang, Peter Hayes, Tom Bird, Raza Habib, David Barber

For distributions $p$ and $q$ with different supports, the divergence $D(p\|q)$ may not exist.

Training generative latent models by variational f-divergence minimization

no code implementations • 27 Sep 2018 • Mingtian Zhang, Thomas Bird, Raza Habib, Tianlin Xu, David Barber

Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific form of f-divergence between the model and data distribution.

Noisy Information Bottlenecks for Generalization

no code implementations • 27 Sep 2018 • Julius Kunze, Louis Kirsch, Hippolyt Ritter, David Barber

We propose Noisy Information Bottlenecks (NIB) to limit mutual information between learned parameters and the data through noise.

Stochastic Variational Optimization

no code implementations • 13 Sep 2018 • Thomas Bird, Julius Kunze, David Barber

These approaches are of particular interest because they are parallelizable.
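
For context, variational optimization upper-bounds a minimisation problem via $\min_x f(x) \le \mathbb{E}_{x \sim q_\theta(x)}[f(x)]$ and minimises the bound over $\theta$ instead; the gradient $\nabla_\theta \mathbb{E}_{q_\theta}[f(x)] = \mathbb{E}_{q_\theta}[f(x)\, \nabla_\theta \log q_\theta(x)]$ is estimated from Monte Carlo samples, and each sample's evaluation of $f$ is independent, which is what makes these approaches parallelizable.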

Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers

1 code implementation • CVPR 2019 • Zhen He, Jian Li, Daxue Liu, Hangen He, David Barber

To achieve both label-free and end-to-end learning of MOT, we propose a Tracking-by-Animation framework, where a differentiable neural model first tracks objects from input frames and then animates these objects into reconstructed frames.

Multi-Object Tracking • Online Multi-Object Tracking

Generative Neural Machine Translation

no code implementations • NeurIPS 2018 • Harshil Shah, David Barber

We introduce Generative Neural Machine Translation (GNMT), a latent variable architecture which is designed to model the semantics of the source and target sentences.

Machine Translation • Translation

Generating Sentences Using a Dynamic Canvas

no code implementations • 13 Jun 2018 • Harshil Shah, Bowen Zheng, David Barber

We introduce the Attentive Unsupervised Text (W)riter (AUTR), which is a word level generative model for natural language.

Gaussian mixture models with Wasserstein distance

no code implementations • 12 Jun 2018 • Benoit Gaujac, Ilya Feige, David Barber

Generative models with both discrete and continuous latent variables are highly motivated by the structure of many real-world data sets.

Improving latent variable descriptiveness with AutoGen

no code implementations • 12 Jun 2018 • Alex Mansbridge, Roberto Fierimonte, Ilya Feige, David Barber

Powerful generative models, particularly in Natural Language Modelling, are commonly trained by maximizing a variational lower bound on the data log likelihood.

Language Modelling
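
The variational lower bound (ELBO) being maximised is the standard one: $\log p_\theta(x) \ge \mathbb{E}_{q_\phi(z \mid x)}[\log p_\theta(x \mid z)] - \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)$.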

A Scalable Laplace Approximation for Neural Networks

1 code implementation • ICLR 2018 • Hippolyt Ritter, Aleksandar Botev, David Barber

PyTorch implementations of Bayes by Backprop, MC Dropout, SGLD, the Local Reparametrization Trick, KF-Laplace, and more.

Bayesian Inference

Practical Gauss-Newton Optimisation for Deep Learning

no code implementations • ICML 2017 • Aleksandar Botev, Hippolyt Ritter, David Barber

We present an efficient block-diagonal approximation to the Gauss-Newton matrix for feedforward neural networks.
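
For reference, the Gauss-Newton matrix replaces the Hessian of the loss $\ell(f_\theta(x))$ with $G = \mathbb{E}\big[J^\top H_\ell\, J\big]$, where $J = \partial f_\theta(x)/\partial \theta$ and $H_\ell$ is the Hessian of $\ell$ with respect to the network outputs; $G$ is positive semi-definite whenever $\ell$ is convex in the outputs, and a block-diagonal approximation keeps only the blocks corresponding to individual weight layers.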

Thinking Fast and Slow with Deep Learning and Tree Search

4 code implementations • NeurIPS 2017 • Thomas Anthony, Zheng Tian, David Barber

Sequential decision making problems, such as structured prediction, robotic control, and game playing, require a combination of planning policies and generalisation of those plans.

Decision Making • reinforcement-learning +1

Nesterov's Accelerated Gradient and Momentum as approximations to Regularised Update Descent

no code implementations • 7 Jul 2016 • Aleksandar Botev, Guy Lever, David Barber

We present a unifying framework for adapting the update direction in gradient-based iterative optimization methods.
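
For reference, the standard forms involved: classical momentum updates $v_{t+1} = \mu v_t - \eta \nabla f(\theta_t)$, $\theta_{t+1} = \theta_t + v_{t+1}$, while Nesterov's accelerated gradient instead evaluates the gradient at the lookahead point, $v_{t+1} = \mu v_t - \eta \nabla f(\theta_t + \mu v_t)$; the paper derives both as approximations to a regularised update descent.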

Dealing with a large number of classes -- Likelihood, Discrimination or Ranking?

no code implementations • 22 Jun 2016 • David Barber, Aleksandar Botev

We consider training probabilistic classifiers in the case of a large number of classes.

On solving Ordinary Differential Equations using Gaussian Processes

no code implementations • 17 Aug 2014 • David Barber

We describe a set of Gaussian Process based approaches that can be used to solve non-linear Ordinary Differential Equations.

Gaussian Processes

Affine Independent Variational Inference

no code implementations • NeurIPS 2012 • Edward Challis, David Barber

We present a method for approximate inference for a broad class of non-conjugate probabilistic models.

Variational Inference

A Unifying Perspective of Parametric Policy Search Methods for Markov Decision Processes

no code implementations • NeurIPS 2012 • Thomas Furmston, David Barber

This analysis leads naturally to the consideration of this approximate Newton method as an alternative gradient-based method for Markov Decision Processes.
