Search Results for author: Richard E. Turner

Found 89 papers, 47 papers with code

A Generative Model of Symmetry Transformations

no code implementations 4 Mar 2024 James Urquhart Allingham, Bruno Kacper Mlodozeniec, Shreyas Padhy, Javier Antorán, David Krueger, Richard E. Turner, Eric Nalisnick, José Miguel Hernández-Lobato

Correctly capturing the symmetry transformations of data can lead to efficient models with strong generalization capabilities, though methods incorporating symmetries often require prior knowledge.

Denoising Diffusion Probabilistic Models in Six Simple Steps

no code implementations 6 Feb 2024 Richard E. Turner, Cristiana-Diana Diaconu, Stratis Markou, Aliaksandra Shysheya, Andrew Y. K. Foong, Bruno Mlodozeniec

Denoising Diffusion Probabilistic Models (DDPMs) are a very popular class of deep generative model that have been successfully applied to a diverse range of problems including image and video generation, protein and material synthesis, weather forecasting, and neural surrogates of partial differential equations.

Denoising Video Generation +1
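
To make the paper's subject concrete, here is a minimal sketch of the DDPM forward-noising process and the standard noise-prediction training objective. The linear beta schedule and the placeholder `eps_model` are illustrative assumptions, not the paper's prescriptions; a real DDPM uses a neural network denoiser.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # noise schedule beta_t (assumed linear)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)          # \bar{alpha}_t = prod_{s<=t} alpha_s

def eps_model(x_t, t):
    """Hypothetical denoiser; a real DDPM uses a neural network here."""
    return np.zeros_like(x_t)

def ddpm_loss(x0, rng):
    t = rng.integers(T)                  # sample a timestep uniformly
    eps = rng.standard_normal(x0.shape)  # target noise
    # Closed-form forward process: q(x_t | x_0)
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    # Simple (unweighted) noise-prediction objective
    return np.mean((eps_model(x_t, t) - eps) ** 2)

rng = np.random.default_rng(0)
print(ddpm_loss(rng.standard_normal((16, 8)), rng))
```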

Can We Remove the Square-Root in Adaptive Gradient Methods? A Second-Order Perspective

no code implementations 5 Feb 2024 Wu Lin, Felix Dangel, Runa Eschenhagen, Juhan Bae, Richard E. Turner, Alireza Makhzani

Adaptive gradient optimizers like Adam(W) are the default training algorithms for many deep learning architectures, such as transformers.

Second-order methods
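
The question in the title can be made concrete with a toy comparison of an Adam-style update with and without the square root in the preconditioner. This is only an illustration of the two update rules on a quadratic, not the method analysed in the paper.

```python
import numpy as np

# Toy quadratic f(w) = 0.5 * w' H w with a badly conditioned Hessian.
H = np.diag([1.0, 100.0])
grad = lambda w: H @ w

def run(use_sqrt, steps=500, lr=0.05, beta2=0.999, eps=1e-8):
    w, v = np.array([1.0, 1.0]), np.zeros(2)
    for t in range(1, steps + 1):
        g = grad(w)
        v = beta2 * v + (1 - beta2) * g ** 2       # second-moment estimate
        v_hat = v / (1 - beta2 ** t)               # bias correction
        precond = np.sqrt(v_hat) if use_sqrt else v_hat
        w = w - lr * g / (precond + eps)
    return w

print("with square root:   ", run(True))
print("without square root:", run(False))
```

Removing the root changes the scale and the sign-invariance of the preconditioner, which is exactly why the paper reinterprets the root-free update from a second-order perspective.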

Transformer Neural Autoregressive Flows

no code implementations 3 Jan 2024 Massimiliano Patacchiola, Aliaksandra Shysheya, Katja Hofmann, Richard E. Turner

In this paper, we propose a novel solution to these challenges by exploiting transformers to define a new class of neural flows called Transformer Neural Autoregressive Flows (T-NAFs).

Density Estimation

Identifiable Feature Learning for Spatial Data with Nonlinear ICA

no code implementations 28 Nov 2023 Hermanni Hälvä, Jonathan So, Richard E. Turner, Aapo Hyvärinen

In this paper, we introduce a new nonlinear ICA framework that employs $t$-process (TP) latent components which apply naturally to data with higher-dimensional dependency structures, such as spatial and spatio-temporal data.

Disentanglement Variational Inference

Diffusion-Augmented Neural Processes

no code implementations 16 Nov 2023 Lorenzo Bonito, James Requeima, Aliaksandra Shysheya, Richard E. Turner

Over the last few years, Neural Processes have become a useful modelling tool in many application areas, such as healthcare and climate sciences, in which data are scarce and prediction uncertainty estimates are indispensable.

Kronecker-Factored Approximate Curvature for Modern Neural Network Architectures

no code implementations NeurIPS 2023 Runa Eschenhagen, Alexander Immer, Richard E. Turner, Frank Schneider, Philipp Hennig

In this work, we identify two different settings of linear weight-sharing layers which motivate two flavours of K-FAC -- $\textit{expand}$ and $\textit{reduce}$.

Sim2Real for Environmental Neural Processes

1 code implementation 30 Oct 2023 Jonas Scholz, Tom R. Andersson, Anna Vaughan, James Requeima, Richard E. Turner

On held-out weather stations, Sim2Real training substantially outperforms the same model architecture trained only with reanalysis data or only with station data, showing that reanalysis data can serve as a stepping stone for learning from real observations.

Optimising Distributions with Natural Gradient Surrogates

1 code implementation 18 Oct 2023 Jonathan So, Richard E. Turner

In this work we propose a novel technique for tackling such issues, which involves reframing the optimisation as one with respect to the parameters of a surrogate distribution, for which computing the natural gradient is easy.

Variational Inference
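
The "easy natural gradient" rests on a standard exponential-family identity, stated here to fix ideas: with natural parameters $\eta$, mean parameters $\mu = \mathbb{E}_{q_\eta}[T(x)]$, and Fisher information $F(\eta)$, the natural gradient of a loss $\mathcal{L}$ equals the ordinary gradient taken with respect to the mean parameters,

$$ F(\eta)^{-1} \nabla_\eta \mathcal{L} = \nabla_\mu \mathcal{L}. $$

Choosing a surrogate distribution for which structure of this kind is available is what makes the reframed optimisation cheap; the paper's specific surrogate constructions go beyond this sketch.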

Beyond Intuition, a Framework for Applying GPs to Real-World Data

1 code implementation 6 Jul 2023 Kenza Tazi, Jihao Andreas Lin, Ross Viljoen, Alex Gardner, ST John, Hong Ge, Richard E. Turner

Gaussian Processes (GPs) offer an attractive method for regression over small, structured and correlated datasets.

Gaussian Processes regression

Comparing the Efficacy of Fine-Tuning and Meta-Learning for Few-Shot Policy Imitation

1 code implementation 23 Jun 2023 Massimiliano Patacchiola, Mingfei Sun, Katja Hofmann, Richard E. Turner

Despite its simplicity, this baseline is competitive with meta-learning methods on a variety of conditions and is able to imitate target policies trained on unseen variations of the original environment.

Few-Shot Image Classification Few-Shot Imitation Learning +3

An Introduction to Transformers

no code implementations 20 Apr 2023 Richard E. Turner

The transformer is a neural network component that can be used to learn useful representations of sequences or sets of data-points.
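
A minimal sketch of the core operation the note introduces, single-head scaled dot-product self-attention, written in plain numpy; the random projection matrices stand in for parameters that would be learned in practice.

```python
import numpy as np

# Shapes: X is (sequence length, model dim); W_q/W_k/W_v are learned in practice.
def self_attention(X, W_q, W_k, W_v):
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # scaled dot-product
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)      # softmax over keys
    return weights @ V                             # attention-weighted values

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
W = [rng.standard_normal((8, 8)) for _ in range(3)]
print(self_attention(X, *W).shape)  # (5, 8)
```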

Autoregressive Conditional Neural Processes

1 code implementation 25 Mar 2023 Wessel P. Bruinsma, Stratis Markou, James Requeima, Andrew Y. K. Foong, Tom R. Andersson, Anna Vaughan, Anthony Buonomo, J. Scott Hosking, Richard E. Turner

Our work provides an example of how ideas from neural distribution estimation can benefit neural processes, and motivates research into the AR deployment of other neural process models.

Meta-Learning
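
The autoregressive (AR) deployment the paper studies can be sketched in a few lines: instead of predicting all targets at once, sample them one at a time and feed each sample back into the context. `cnp_predict` below is a hypothetical stand-in for a trained conditional neural process returning a per-point predictive mean and standard deviation.

```python
import numpy as np

def cnp_predict(ctx_x, ctx_y, x):
    """Placeholder predictive, not a real CNP."""
    return np.mean(ctx_y), np.std(ctx_y) + 1e-3

def ar_sample(ctx_x, ctx_y, target_xs, rng):
    ctx_x, ctx_y = list(ctx_x), list(ctx_y)
    samples = []
    for x in target_xs:                    # one target at a time
        mu, sigma = cnp_predict(ctx_x, ctx_y, x)
        y = rng.normal(mu, sigma)          # sample from the marginal
        ctx_x.append(x); ctx_y.append(y)   # condition on the model's own sample
        samples.append(y)
    return np.array(samples)

rng = np.random.default_rng(0)
print(ar_sample([0.0, 1.0], [0.2, 0.4], [1.5, 2.0, 2.5], rng))
```

Chaining the one-dimensional marginals in this way induces dependencies between target outputs even though the underlying CNP only models independent marginals.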

First Session Adaptation: A Strong Replay-Free Baseline for Class-Incremental Learning

no code implementations ICCV 2023 Aristeidis Panos, Yuriko Kobe, Daniel Olmeda Reino, Rahaf Aljundi, Richard E. Turner

In this work, we develop a baseline method, First Session Adaptation (FSA), that sheds light on the efficacy of existing CIL approaches and allows us to assess the relative performance contributions from head and body adaptation.

Class Incremental Learning Image Classification +1

Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners

no code implementations 23 Nov 2022 Elre T. Oldewage, John Bronskill, Richard E. Turner

This paper examines the robustness of deployed few-shot meta-learning systems when they are fed an imperceptibly perturbed few-shot dataset.

Data Poisoning Meta-Learning

Differentially private partitioned variational inference

1 code implementation 23 Sep 2022 Mikko A. Heikkilä, Matthew Ashman, Siddharth Swaroop, Richard E. Turner, Antti Honkela

In this paper, we present differentially private partitioned variational inference, the first general framework for learning a variational approximation to a Bayesian posterior distribution in the federated learning setting while minimising the number of communication rounds and providing differential privacy guarantees for data subjects.

Federated Learning Privacy Preserving +1

Kernel Learning for Explainable Climate Science

1 code implementation 11 Sep 2022 Vidhi Lalchand, Kenza Tazi, Talay M. Cheema, Richard E. Turner, Scott Hosking

We account for the spatial variation in precipitation with a non-stationary Gibbs kernel parameterised with an input-dependent lengthscale.

Gaussian Processes

The Neural Process Family: Survey, Applications and Perspectives

1 code implementation 1 Sep 2022 Saurav Jha, Dong Gong, Xuesong Wang, Richard E. Turner, Lina Yao

We shed light on their potential to bring several recent advances in other deep learning domains under one umbrella.

Gaussian Processes Meta-Learning

Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification

1 code implementation 20 Jun 2022 Massimiliano Patacchiola, John Bronskill, Aliaksandra Shysheya, Katja Hofmann, Sebastian Nowozin, Richard E. Turner

In this paper we push this Pareto frontier in the few-shot image classification setting with a key contribution: a new adaptive block called Contextual Squeeze-and-Excitation (CaSE) that adjusts a pretrained neural network on a new task to significantly improve performance with a single forward pass of the user data (context).

Few-Shot Image Classification Few-Shot Learning +1
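
A minimal sketch of a squeeze-and-excitation-style adapter conditioned on a context set, in the spirit of CaSE: pool context features per channel, pass them through a small MLP, and rescale the channels of the new input. The two weight matrices are placeholders for parameters that would be learned; this is not the paper's exact block.

```python
import numpy as np

def case_block(context_feats, x_feats, W1, W2):
    s = context_feats.mean(axis=(0, 2, 3))      # squeeze: pool over the context set
    h = np.maximum(W1 @ s, 0.0)                 # small MLP with ReLU
    gamma = 1.0 / (1.0 + np.exp(-(W2 @ h)))     # sigmoid channel gates
    return x_feats * gamma[None, :, None, None] # excite: rescale channels

rng = np.random.default_rng(0)
C = 16
ctx = rng.standard_normal((10, C, 8, 8))        # context ("user data") features
x = rng.standard_normal((4, C, 8, 8))           # query features
W1, W2 = rng.standard_normal((C // 2, C)), rng.standard_normal((C, C // 2))
print(case_block(ctx, x, W1, W2).shape)         # (4, 16, 8, 8)
```

Because the gates are computed from the pooled context in one pass, adapting to a new task requires only a single forward pass over the context, matching the motivation in the abstract.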

Multi-disciplinary fairness considerations in machine learning for clinical trials

no code implementations 18 May 2022 Isabel Chien, Nina Deliu, Richard E. Turner, Adrian Weller, Sofia S. Villar, Niki Kilbertus

While interest in the application of machine learning to improve healthcare has grown tremendously in recent years, a number of barriers prevent deployment in medical practice.

BIG-bench Machine Learning Fairness

Practical Conditional Neural Processes Via Tractable Dependent Predictions

no code implementations 16 Mar 2022 Stratis Markou, James Requeima, Wessel P. Bruinsma, Anna Vaughan, Richard E. Turner

Existing approaches which model output dependencies, such as Neural Processes (NPs; Garnelo et al., 2018b) or the FullConvGNP (Bruinsma et al., 2021), are either complicated to train or prohibitively expensive.

Decision Making Meta-Learning

Modelling Non-Smooth Signals with Complex Spectral Structure

1 code implementation 14 Mar 2022 Wessel P. Bruinsma, Martin Tegnér, Richard E. Turner

The Gaussian Process Convolution Model (GPCM; Tobar et al., 2015a) is a model for signals with complex spectral structure.

Variational Inference

Continual Novelty Detection

1 code implementation 24 Jun 2021 Rahaf Aljundi, Daniel Olmeda Reino, Nikolay Chumerin, Richard E. Turner

This work identifies the crucial link between the two problems and investigates the Novelty Detection problem under the Continual Learning setting.

Continual Learning Novelty Detection

Combining Pseudo-Point and State Space Approximations for Sum-Separable Gaussian Processes

1 code implementation Approximate Inference (AABI) Symposium 2021 Will Tebbutt, Arno Solin, Richard E. Turner

Pseudo-point approximations, one of the gold-standard methods for scaling GPs to large data sets, are well suited for handling off-the-grid spatial data.

Epidemiology Gaussian Processes +2

How Tight Can PAC-Bayes be in the Small Data Regime?

1 code implementation NeurIPS 2021 Andrew Y. K. Foong, Wessel P. Bruinsma, David R. Burt, Richard E. Turner

Interestingly, this lower bound recovers the Chernoff test set bound if the posterior is equal to the prior.

Contextual HyperNetworks for Novel Feature Adaptation

no code implementations 12 Apr 2021 Angus Lamb, Evgeny Saveliev, Yingzhen Li, Sebastian Tschiatschek, Camilla Longden, Simon Woodhead, José Miguel Hernández-Lobato, Richard E. Turner, Pashmina Cameron, Cheng Zhang

While deep learning has obtained state-of-the-art results in many applications, the adaptation of neural network architectures to incorporate new output features remains a challenge, as neural networks are commonly trained to produce a fixed output dimension.

Few-Shot Learning Imputation +1

Convolutional conditional neural processes for local climate downscaling

1 code implementation 20 Jan 2021 Anna Vaughan, Will Tebbutt, J. Scott Hosking, Richard E. Turner

A new model is presented for multisite statistical downscaling of temperature and precipitation using convolutional conditional neural processes (convCNPs).

Gaussian Processes

The Gaussian Neural Process

1 code implementation Approximate Inference (AABI) Symposium 2021 Wessel P. Bruinsma, James Requeima, Andrew Y. K. Foong, Jonathan Gordon, Richard E. Turner

Neural Processes (NPs; Garnelo et al., 2018a, b) are a rich class of models for meta-learning that map data sets directly to predictive stochastic processes.

Meta-Learning Translation

Generalized Variational Continual Learning

no code implementations ICLR 2021 Noel Loo, Siddharth Swaroop, Richard E. Turner

One strand of research has used probabilistic regularization for continual learning, with two of the main approaches in this vein being Online Elastic Weight Consolidation (Online EWC) and Variational Continual Learning (VCL).

Continual Learning Variational Inference

Interpreting Spatially Infinite Generative Models

no code implementations 24 Jul 2020 Chaochao Lu, Richard E. Turner, Yingzhen Li, Nate Kushman

In this paper we provide a firm theoretical interpretation for infinite spatial generation, by drawing connections to spatial stochastic processes.

Generative Adversarial Network Texture Synthesis

Instructions and Guide for Diagnostic Questions: The NeurIPS 2020 Education Challenge

no code implementations 23 Jul 2020 Zichao Wang, Angus Lamb, Evgeny Saveliev, Pashmina Cameron, Yordan Zaykov, José Miguel Hernández-Lobato, Richard E. Turner, Richard G. Baraniuk, Craig Barton, Simon Peyton Jones, Simon Woodhead, Cheng Zhang

In this competition, participants will focus on the students' answer records to these multiple-choice diagnostic questions, with the aim of 1) accurately predicting which answers the students provide; 2) accurately predicting which questions have high quality; and 3) determining a personalized sequence of questions for each student that best predicts the student's answers.

Misconceptions Multiple-choice

Continual Deep Learning by Functional Regularisation of Memorable Past

1 code implementation NeurIPS 2020 Pingbo Pan, Siddharth Swaroop, Alexander Immer, Runa Eschenhagen, Richard E. Turner, Mohammad Emtiyaz Khan

Continually learning new skills is important for intelligent systems, yet standard deep learning methods suffer from catastrophic forgetting of the past.

TaskNorm: Rethinking Batch Normalization for Meta-Learning

2 code implementations ICML 2020 John Bronskill, Jonathan Gordon, James Requeima, Sebastian Nowozin, Richard E. Turner

Modern meta-learning approaches for image classification rely on increasingly deep networks to achieve state-of-the-art performance, making batch normalization an essential component of meta-learning pipelines.

General Classification Image Classification +1

Icebreaker: Element-wise Efficient Information Acquisition with a Bayesian Deep Latent Gaussian Model

1 code implementation NeurIPS 2019 Wenbo Gong, Sebastian Tschiatschek, Sebastian Nowozin, Richard E. Turner, José Miguel Hernández-Lobato, Cheng Zhang

In this paper, we address the ice-start problem, i.e., the challenge of deploying machine learning models when little or no training data is initially available, and acquiring each feature element of data is associated with costs.

BIG-bench Machine Learning Imputation +1

Semi-supervised Bootstrapping of Dialogue State Trackers for Task Oriented Modelling

no code implementations 26 Nov 2019 Bo-Hsiang Tseng, Marek Rei, Paweł Budzianowski, Richard E. Turner, Bill Byrne, Anna Korhonen

Dialogue systems benefit greatly from optimizing on detailed annotations, such as transcribed utterances, internal dialogue state representations and dialogue act labels.

Continual Learning with Adaptive Weights (CLAW)

no code implementations ICLR 2020 Tameem Adel, Han Zhao, Richard E. Turner

Approaches to continual learning aim to successfully learn a set of related tasks that arrive in an online manner.

Continual Learning Transfer Learning +1

Scalable Exact Inference in Multi-Output Gaussian Processes

1 code implementation ICML 2020 Wessel P. Bruinsma, Eric Perim, Will Tebbutt, J. Scott Hosking, Arno Solin, Richard E. Turner

Multi-output Gaussian processes (MOGPs) leverage the flexibility and interpretability of GPs while capturing structure across outputs, which is desirable, for example, in spatio-temporal modelling.

Gaussian Processes

Convolutional Conditional Neural Processes

3 code implementations ICLR 2020 Jonathan Gordon, Wessel P. Bruinsma, Andrew Y. K. Foong, James Requeima, Yann Dubois, Richard E. Turner

We introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data.

Inductive Bias Time Series +3
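
The ConvCNP's translation equivariance comes from encoding the context set as a function on a grid before applying a CNN. A minimal sketch of that functional encoding (a "SetConv" with an RBF kernel plus a density channel) follows; the lengthscale and grid are illustrative choices, and a CNN decoder would act on the output in the full model.

```python
import numpy as np

def set_conv(ctx_x, ctx_y, grid, lengthscale=0.2):
    w = np.exp(-0.5 * ((grid[:, None] - ctx_x[None, :]) / lengthscale) ** 2)
    density = w.sum(axis=1)                         # how much context is nearby
    signal = w @ ctx_y / np.maximum(density, 1e-9)  # normalised local average
    return np.stack([density, signal], axis=-1)     # (grid size, 2 channels)

grid = np.linspace(-2, 2, 100)
enc = set_conv(np.array([-1.0, 0.5]), np.array([0.3, -0.7]), grid)
print(enc.shape)  # (100, 2)
```

Shifting the context points shifts the encoded function by the same amount, so any convolutional network applied to it inherits translation equivariance.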

Independent Subspace Analysis for Unsupervised Learning of Disentangled Representations

no code implementations 5 Sep 2019 Jan Stühmer, Richard E. Turner, Sebastian Nowozin

Second, we demonstrate that the proposed prior encourages a latent representation which facilitates the learning of disentangled representations.

Disentanglement Variational Inference

On the Expressiveness of Approximate Inference in Bayesian Neural Networks

2 code implementations NeurIPS 2020 Andrew Y. K. Foong, David R. Burt, Yingzhen Li, Richard E. Turner

While Bayesian neural networks (BNNs) hold the promise of being flexible, well-calibrated statistical models, inference often requires approximations whose consequences are poorly understood.

Active Learning Bayesian Inference +3

'In-Between' Uncertainty in Bayesian Neural Networks

no code implementations 27 Jun 2019 Andrew Y. K. Foong, Yingzhen Li, José Miguel Hernández-Lobato, Richard E. Turner

We describe a limitation in the expressiveness of the predictive uncertainty estimate given by mean-field variational inference (MFVI), a popular approximate inference method for Bayesian neural networks.

Active Learning Bayesian Optimisation +1

Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes

1 code implementation NeurIPS 2019 James Requeima, Jonathan Gordon, John Bronskill, Sebastian Nowozin, Richard E. Turner

We introduce a conditional neural process based approach to the multi-task classification setting for this purpose, and establish connections to the meta-learning and few-shot learning literature.

Active Learning Continual Learning +4

Practical Deep Learning with Bayesian Principles

1 code implementation NeurIPS 2019 Kazuki Osawa, Siddharth Swaroop, Anirudh Jain, Runa Eschenhagen, Richard E. Turner, Rio Yokota, Mohammad Emtiyaz Khan

Importantly, the benefits of Bayesian principles are preserved: predictive probabilities are well-calibrated, uncertainties on out-of-distribution data are improved, and continual-learning performance is boosted.

Continual Learning Data Augmentation +1

Fast computation of loudness using a deep neural network

no code implementations 24 May 2019 Josef Schlittenlacher, Richard E. Turner, Brian C. J. Moore

The DNN was trained using the output of a more complex model, called the Cambridge loudness model.

Infinite-Horizon Gaussian Processes

1 code implementation NeurIPS 2018 Arno Solin, James Hensman, Richard E. Turner

The complexity is still cubic in the state dimension $m$ which is an impediment to practical application.

Gaussian Processes

Deterministic Variational Inference for Robust Bayesian Neural Networks

3 code implementations ICLR 2019 Anqi Wu, Sebastian Nowozin, Edward Meeds, Richard E. Turner, José Miguel Hernández-Lobato, Alexander L. Gaunt

We provide two innovations that aim to turn VB into a robust inference tool for Bayesian neural networks: first, we introduce a novel deterministic method to approximate moments in neural networks, eliminating gradient variance; second, we introduce a hierarchical prior for parameters and a novel Empirical Bayes procedure for automatically selecting prior variances.

Variational Inference
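
The deterministic moment approximation can be sketched for the simplest case, a mean-field Bayesian linear layer: given an input mean and variance and independent Gaussian weights, the output mean and variance are available in closed form, with no Monte Carlo sampling and hence no gradient variance. The handling of nonlinearities in the paper goes beyond this sketch.

```python
import numpy as np

def linear_moments(m_x, v_x, M_w, V_w):
    # z_i = sum_j W_ij x_j with W_ij ~ N(M_ij, V_ij), x_j independent.
    m_z = M_w @ m_x
    v_z = (M_w ** 2) @ v_x + V_w @ (m_x ** 2 + v_x)
    return m_z, v_z

rng = np.random.default_rng(0)
m_x, v_x = rng.standard_normal(4), np.full(4, 0.1)
M_w, V_w = rng.standard_normal((3, 4)), np.full((3, 4), 0.05)
print(linear_moments(m_x, v_x, M_w, V_w))
```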

Meta-Learning Probabilistic Inference For Prediction

1 code implementation ICLR 2019 Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, Richard E. Turner

We introduce VERSA, an instance of the framework employing a flexible and versatile amortization network that takes few-shot learning datasets as inputs, with arbitrary numbers of shots, and outputs a distribution over task-specific parameters in a single forward pass.

Few-Shot Image Classification Few-Shot Learning

Nonlinear ICA Using Auxiliary Variables and Generalized Contrastive Learning

1 code implementation 22 May 2018 Aapo Hyvarinen, Hiroaki Sasaki, Richard E. Turner

Here, we propose a general framework for nonlinear ICA, which, as a special case, can make use of temporal structure.

Contrastive Learning Representation Learning +2

Gaussian Process Behaviour in Wide Deep Neural Networks

2 code implementations ICLR 2018 Alexander G. de G. Matthews, Mark Rowland, Jiri Hron, Richard E. Turner, Zoubin Ghahramani

Whilst deep neural networks have shown great empirical success, there is still much work to be done to understand their theoretical properties.

Gaussian Processes

Structured Evolution with Compact Architectures for Scalable Policy Optimization

no code implementations ICML 2018 Krzysztof Choromanski, Mark Rowland, Vikas Sindhwani, Richard E. Turner, Adrian Weller

We present a new method of blackbox optimization via gradient approximation with the use of structured random orthogonal matrices, providing more accurate estimators than baselines and with provable theoretical guarantees.

OpenAI Gym Text-to-Image Generation

The Mirage of Action-Dependent Baselines in Reinforcement Learning

1 code implementation ICML 2018 George Tucker, Surya Bhupatiraju, Shixiang Gu, Richard E. Turner, Zoubin Ghahramani, Sergey Levine

Policy gradient methods are a widely used class of model-free reinforcement learning algorithms where a state-dependent baseline is used to reduce gradient estimator variance.

Policy Gradient Methods reinforcement-learning +1
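
For context, the estimator under study is the score-function policy gradient with a baseline,

$$ \nabla_\theta J(\theta) = \mathbb{E}_{s, a \sim \pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(a \mid s)\,\big(\hat{Q}(s, a) - b(s)\big) \right]. $$

Any state-dependent $b(s)$ leaves the estimator unbiased because $\mathbb{E}_{a \sim \pi_\theta}[\nabla_\theta \log \pi_\theta(a \mid s)] = 0$; making the baseline depend on the action breaks this property unless a correction term is added, which is the regime the paper scrutinises.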

Learning Causally-Generated Stationary Time Series

no code implementations 22 Feb 2018 Wessel Bruinsma, Richard E. Turner

We present the Causal Gaussian Process Convolution Model (CGPCM), a doubly nonparametric model for causal, spectrally complex dynamical phenomena.

Time Series Time Series Analysis +1

The Gaussian Process Autoregressive Regression Model (GPAR)

1 code implementation 20 Feb 2018 James Requeima, Will Tebbutt, Wessel Bruinsma, Richard E. Turner

Multi-output regression models must exploit dependencies between outputs to maximise predictive performance.

Gaussian Processes regression

Variational Continual Learning

8 code implementations ICLR 2018 Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, Richard E. Turner

This paper develops variational continual learning (VCL), a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks.

Continual Learning Variational Inference
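
The core recursion of VCL is compact enough to state here: the approximate posterior after task $t$ is obtained by treating the previous approximation as the prior,

$$ q_t(\theta) = \operatorname*{arg\,min}_{q \in \mathcal{Q}} \; \mathrm{KL}\!\left( q(\theta) \,\middle\|\, \frac{1}{Z_t}\, q_{t-1}(\theta)\, p(\mathcal{D}_t \mid \theta) \right), \qquad q_0(\theta) = p(\theta). $$

With exact inference this recovers the true online posterior; the variational projection makes each step tractable for neural networks.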

Streaming Sparse Gaussian Process Approximations

3 code implementations NeurIPS 2017 Thang D. Bui, Cuong V. Nguyen, Richard E. Turner

Sparse pseudo-point approximations for Gaussian process (GP) models provide a suite of methods that support deployment of GPs in the large data regime and enable analytic intractabilities to be sidestepped.

Gradient Estimators for Implicit Models

1 code implementation ICLR 2018 Yingzhen Li, Richard E. Turner

Implicit models, which allow for the generation of samples but not for point-wise evaluation of probabilities, are omnipresent in real-world problems tackled by machine learning and a hot topic of current research.

Image Generation Meta-Learning

Approximate Inference with Amortised MCMC

no code implementations 27 Feb 2017 Yingzhen Li, Richard E. Turner, Qiang Liu

We propose a novel approximate inference algorithm that approximates a target distribution by amortising the dynamics of a user-selected MCMC sampler.

Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control

no code implementations ICML 2017 Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, José Miguel Hernández-Lobato, Richard E. Turner, Douglas Eck

This paper proposes a general method for improving the structure and quality of sequences generated by a recurrent neural network (RNN), while maintaining information originally learned from data, as well as sample diversity.

Reinforcement Learning (RL)

Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic

2 code implementations 7 Nov 2016 Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine

We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation.

Continuous Control Policy Gradient Methods +2

A Unifying Framework for Gaussian Process Pseudo-Point Approximations using Power Expectation Propagation

1 code implementation 23 May 2016 Thang D. Bui, Josiah Yan, Richard E. Turner

Unlike much of the previous venerable work in this area, the new framework is built on standard methods for approximate inference (variational free-energy, EP and Power EP methods) rather than employing approximations to the probabilistic generative model itself.

Gaussian Processes

The Multivariate Generalised von Mises distribution: Inference and applications

no code implementations 16 Feb 2016 Alexandre K. W. Navarro, Jes Frellsen, Richard E. Turner

First we introduce a new multivariate distribution over circular variables, called the multivariate Generalised von Mises (mGvM) distribution.

Gaussian Processes

Deep Gaussian Processes for Regression using Approximate Expectation Propagation

no code implementations 12 Feb 2016 Thang D. Bui, Daniel Hernández-Lobato, Yingzhen Li, José Miguel Hernández-Lobato, Richard E. Turner

Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers.

Gaussian Processes regression

Rényi Divergence Variational Inference

2 code implementations NeurIPS 2016 Yingzhen Li, Richard E. Turner

This paper introduces the variational Rényi bound (VR) that extends traditional variational inference to Rényi's alpha-divergences.

Variational Inference
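
The central object is the variational Rényi bound,

$$ \mathcal{L}_\alpha(q; x) = \frac{1}{1 - \alpha} \log \mathbb{E}_{q(z)}\!\left[ \left( \frac{p(x, z)}{q(z)} \right)^{1 - \alpha} \right], $$

which recovers the standard ELBO in the limit $\alpha \to 1$ and the exact log marginal likelihood at $\alpha = 0$ (provided $q$ has sufficiently broad support).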

Learning Stationary Time Series using Gaussian Processes with Nonparametric Kernels

no code implementations NeurIPS 2015 Felipe Tobar, Thang D. Bui, Richard E. Turner

We introduce the Gaussian Process Convolution Model (GPCM), a two-stage nonparametric generative procedure to model stationary signals as the convolution between a continuous-time white-noise process and a continuous-time linear filter drawn from a Gaussian process.

Denoising Gaussian Processes +3
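
In symbols, the generative model described above is

$$ f(t) = \int h(t - \tau)\, x(\tau)\, \mathrm{d}\tau, \qquad h \sim \mathcal{GP}(0, k_h), \quad x \text{ a white-noise process}, $$

so that the random filter $h$ controls the signal's spectrum while the white-noise excitation $x$ carries its randomness; $k_h$ is a generic filter kernel, written here only to fix notation.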

Training Deep Gaussian Processes using Stochastic Expectation Propagation and Probabilistic Backpropagation

no code implementations 11 Nov 2015 Thang D. Bui, José Miguel Hernández-Lobato, Yingzhen Li, Daniel Hernández-Lobato, Richard E. Turner

Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers.

Gaussian Processes

Black-box $\alpha$-divergence Minimization

3 code implementations 10 Nov 2015 José Miguel Hernández-Lobato, Yingzhen Li, Mark Rowland, Daniel Hernández-Lobato, Thang Bui, Richard E. Turner

Black-box alpha (BB-$\alpha$) is a new approximate inference method based on the minimization of $\alpha$-divergences.

General Classification regression

Denoising without access to clean data using a partitioned autoencoder

no code implementations 20 Sep 2015 Dan Stowell, Richard E. Turner

Training a denoising autoencoder neural network requires access to truly clean data, a requirement which is often impractical.

Denoising

Stochastic Expectation Propagation

no code implementations NeurIPS 2015 Yingzhen Li, Jose Miguel Hernandez-Lobato, Richard E. Turner

Expectation propagation (EP) is a deterministic approximation algorithm that is often used to perform approximate Bayesian parameter learning.

Variational Inference

Neural Adaptive Sequential Monte Carlo

no code implementations NeurIPS 2015 Shixiang Gu, Zoubin Ghahramani, Richard E. Turner

Experiments indicate that NASMC significantly improves inference in a non-linear state space model outperforming adaptive proposal methods including the Extended Kalman and Unscented Particle Filters.

Variational Inference

On Sparse variational methods and the Kullback-Leibler divergence between stochastic processes

no code implementations 27 Apr 2015 Alexander G. de G. Matthews, James Hensman, Richard E. Turner, Zoubin Ghahramani

We then discuss augmented index sets and show that, contrary to previous works, marginal consistency of augmentation is not enough to guarantee consistency of variational inference with the original model.

Variational Inference

Tree-structured Gaussian Process Approximations

no code implementations NeurIPS 2014 Thang D. Bui, Richard E. Turner

Gaussian process regression can be accelerated by constructing a small pseudo-dataset to summarise the observed data.

Imputation regression +2

Target Fishing: A Single-Label or Multi-Label Problem?

no code implementations 23 Nov 2014 Avid M. Afzal, Hamse Y. Mussa, Richard E. Turner, Andreas Bender, Robert C. Glen

According to Cobanoglu et al. and Murphy, it is now widely acknowledged that the single target paradigm (one protein or target, one disease, one drug) that has been the dominant premise in drug development in the recent past is untenable.

General Classification Multi-class Classification
