Search Results for author: Richard Turner

Found 19 papers, 7 papers with code

On Sparsity and Overcompleteness in Image Models

no code implementations NeurIPS 2007 Pietro Berkes, Richard Turner, Maneesh Sahani

Computational models of visual cortex, and in particular those based on sparse coding, have enjoyed much recent attention.

Model Selection
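Sparse coding represents each image patch as a sparse combination of dictionary atoms. As a generic illustration of that idea only (not this paper's model, which compares sparse and overcomplete variants), here is a minimal ISTA sketch for the lasso-style sparse coding objective, minimising 0.5·||x − Ds||² + λ||s||₁:

```python
import numpy as np

def ista(x, D, lam=0.1, step=None, n_iter=200):
    """Minimise 0.5*||x - D s||^2 + lam*||s||_1 by iterative soft-thresholding."""
    if step is None:
        # 1 / Lipschitz constant of the gradient of the quadratic term
        step = 1.0 / np.linalg.norm(D, 2) ** 2
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ s - x)        # gradient of the smooth part
        z = s - step * g             # gradient step
        s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return s

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)       # unit-norm, overcomplete dictionary (128 > 64)
s_true = np.zeros(128)
s_true[[3, 40, 97]] = [1.5, -2.0, 1.0]
x = D @ s_true                        # synthetic patch with a known sparse code
s_hat = ista(x, D, lam=0.05)
```

The recovered code `s_hat` is sparse and reconstructs `x` closely; the dictionary here is random rather than learned, which is all this sketch is meant to show.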

Occlusive Components Analysis

no code implementations NeurIPS 2009 Jörg Lücke, Richard Turner, Maneesh Sahani, Marc Henniges

We show that the object parameters can be learnt from an unlabelled set of images in which objects occlude one another.

Object

Probabilistic amplitude and frequency demodulation

no code implementations NeurIPS 2011 Richard Turner, Maneesh Sahani

A number of recent scientific and engineering problems require signals to be decomposed into a product of a slowly varying positive envelope and a quickly varying carrier whose instantaneous frequency also varies slowly over time.
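The classical, non-probabilistic baseline for this envelope/carrier decomposition is Hilbert-transform demodulation; the paper's contribution is a probabilistic alternative, but the baseline (sketched below, assuming a clean amplitude-modulated signal) shows what "demodulation" means here:

```python
import numpy as np
from scipy.signal import hilbert

# Amplitude-modulated test signal: slow positive envelope times fast carrier.
fs = 2000
t = np.linspace(0.0, 1.0, fs, endpoint=False)
envelope = 1.0 + 0.5 * np.cos(2 * np.pi * 2 * t)   # slowly varying, positive
carrier = np.cos(2 * np.pi * 100 * t)              # quickly varying
x = envelope * carrier

analytic = hilbert(x)                  # analytic signal: x + i * H[x]
est_env = np.abs(analytic)             # instantaneous amplitude (the envelope)
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) / (2 * np.pi) * fs  # instantaneous frequency, Hz
```

Away from the edges, `est_env` tracks the true envelope and `inst_freq` sits at the 100 Hz carrier; the Hilbert approach degrades when the envelope and carrier bands overlap, which is part of the motivation for a probabilistic treatment.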

Improving the Gaussian Process Sparse Spectrum Approximation by Representing Uncertainty in Frequency Inputs

1 code implementation 9 Mar 2015 Yarin Gal, Richard Turner

We model the covariance function with a finite Fourier series approximation and treat it as a random variable.

Variational Inference
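The sparse spectrum family this paper improves on approximates a stationary covariance with a finite set of Fourier frequencies; the paper's point is to put a distribution over those frequencies rather than fix them. A minimal sketch of the fixed-frequency baseline (random Fourier features for an RBF kernel; the variational treatment of the frequencies is not shown):

```python
import numpy as np

def rff_features(X, n_features, lengthscale=1.0, seed=0):
    """Random Fourier features whose inner products approximate an RBF kernel."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))  # spectral frequencies
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)               # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
Phi = rff_features(X, n_features=5000)
K_approx = Phi @ Phi.T                # finite Fourier series approximation

sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K_exact = np.exp(-0.5 * sq)           # exact RBF kernel, unit lengthscale
```

With enough features `K_approx` converges to `K_exact`; with few features the fixed frequencies overfit, which is the failure mode that representing uncertainty in the frequencies addresses.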

Invariant Models for Causal Transfer Learning

1 code implementation 19 Jul 2015 Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, Jonas Peters

We focus on the problem of Domain Generalization, in which no examples from the test task are observed.

Domain Generalization Transfer Learning

Magnetic Hamiltonian Monte Carlo

no code implementations ICML 2017 Nilesh Tripuraneni, Mark Rowland, Zoubin Ghahramani, Richard Turner

We establish a theoretical basis for the use of non-canonical Hamiltonian dynamics in MCMC, and construct a symplectic, leapfrog-like integrator allowing for the implementation of magnetic HMC.
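For context, the standard leapfrog integrator that magnetic HMC generalizes is sketched below for a separable Hamiltonian H(q, p) = U(q) + ½p·p (the magnetic variant uses a modified, non-canonical flow not shown here). Leapfrog is symplectic and time-reversible, which the test below checks on a Gaussian target:

```python
import numpy as np

def leapfrog(q, p, grad_U, step, n_steps):
    """Standard leapfrog integrator for H(q, p) = U(q) + 0.5 * p @ p."""
    q, p = q.copy(), p.copy()
    p -= 0.5 * step * grad_U(q)          # initial half step in momentum
    for _ in range(n_steps - 1):
        q += step * p                    # full step in position
        p -= step * grad_U(q)            # full step in momentum
    q += step * p
    p -= 0.5 * step * grad_U(q)          # final half step in momentum
    return q, p

# Standard normal target: U(q) = 0.5 * q @ q, so grad_U(q) = q.
grad_U = lambda q: q
H = lambda q, p: 0.5 * q @ q + 0.5 * p @ p

q0 = np.array([1.0, -0.5])
p0 = np.array([0.3, 0.8])
q1, p1 = leapfrog(q0, p0, grad_U, step=0.1, n_steps=50)
```

Energy is approximately conserved (error is O(step²)), and running the integrator backwards with negated momentum recovers the starting point exactly, two properties any HMC-style integrator, magnetic or not, must preserve.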

Overpruning in Variational Bayesian Neural Networks

no code implementations 18 Jan 2018 Brian Trippe, Richard Turner

The motivations for using variational inference (VI) in neural networks differ significantly from those in latent variable models.

Variational Inference

ISA-VAE: Independent Subspace Analysis with Variational Autoencoders

no code implementations ICLR 2019 Jan Stühmer, Richard Turner, Sebastian Nowozin

Extensive quantitative and qualitative experiments demonstrate that the proposed prior mitigates the trade-off between reconstruction loss and disentanglement introduced by modified cost functions such as beta-VAE and TCVAE.

Disentanglement Variational Inference
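For context, the modified cost functions mentioned reweight the KL term of the evidence lower bound; the textbook β-VAE objective (standard form, not taken from this paper) is

```latex
\mathcal{L}_{\beta}(x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
- \beta \, D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big),
```

where β > 1 trades reconstruction quality for disentanglement, the trade-off the proposed prior is designed to mitigate.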

Icebreaker: Element-wise Active Information Acquisition with Bayesian Deep Latent Gaussian Model

1 code implementation 13 Aug 2019 Wenbo Gong, Sebastian Tschiatschek, Richard Turner, Sebastian Nowozin, José Miguel Hernández-Lobato, Cheng Zhang

In this paper we introduce the ice-start problem, i.e., the challenge of deploying machine learning models when little or no training data is initially available and acquiring each feature element of the data incurs a cost.

Active Learning BIG-bench Machine Learning +2

Semi-Supervised Bootstrapping of Dialogue State Trackers for Task-Oriented Modelling

no code implementations IJCNLP 2019 Bo-Hsiang Tseng, Marek Rei, Paweł Budzianowski, Richard Turner, Bill Byrne, Anna Korhonen

Dialogue systems benefit greatly from optimizing on detailed annotations, such as transcribed utterances, internal dialogue state representations and dialogue act labels.

Efficient Low Rank Gaussian Variational Inference for Neural Networks

1 code implementation NeurIPS 2020 Marcin Tomczak, Siddharth Swaroop, Richard Turner

Bayesian neural networks are enjoying a renaissance driven in part by recent advances in variational inference (VI).

Variational Inference
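A common construction in low-rank Gaussian VI (shown as a generic sketch, not claimed to be this paper's exact parameterisation) is a low-rank-plus-diagonal covariance: weights are sampled as w = μ + Bε₁ + σ⊙ε₂, giving Cov[w] = BBᵀ + diag(σ²) while storing only O(dk) numbers instead of a dense d×d matrix:

```python
import numpy as np

def sample_low_rank_gaussian(mu, B, sigma, n_samples, seed=0):
    """Draw w = mu + B @ eps1 + sigma * eps2, so Cov[w] = B B^T + diag(sigma^2).

    B is (d, k) with k << d, so storage is O(d k) rather than O(d^2)."""
    rng = np.random.default_rng(seed)
    d, k = B.shape
    eps1 = rng.normal(size=(n_samples, k))   # shared low-rank noise
    eps2 = rng.normal(size=(n_samples, d))   # independent per-weight noise
    return mu + eps1 @ B.T + eps2 * sigma

d, k = 5, 2
rng = np.random.default_rng(3)
mu = rng.normal(size=d)
B = 0.5 * rng.normal(size=(d, k))
sigma = 0.3 * np.ones(d)

W = sample_low_rank_gaussian(mu, B, sigma, n_samples=200_000)
emp_cov = np.cov(W.T)
target_cov = B @ B.T + np.diag(sigma ** 2)
```

The empirical covariance of the samples matches BBᵀ + diag(σ²), confirming that the cheap reparameterised sampler realises the intended low-rank posterior family.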

Efficient Gaussian Neural Processes for Regression

no code implementations 22 Aug 2021 Stratis Markou, James Requeima, Wessel Bruinsma, Richard Turner

Conditional Neural Processes (CNP; Garnelo et al., 2018) are an attractive family of meta-learning models which produce well-calibrated predictions, enable fast inference at test time, and are trainable via a simple maximum likelihood procedure.

Decision Making Meta-Learning +1

Collapsed Variational Bounds for Bayesian Neural Networks

1 code implementation NeurIPS 2021 Marcin Tomczak, Siddharth Swaroop, Andrew Foong, Richard Turner

Recent interest in learning large variational Bayesian Neural Networks (BNNs) has been partly hampered by poor predictive performance caused by underfitting; their performance is also known to be very sensitive to the prior over weights.

Variational Inference

How Tight Can PAC-Bayes be in the Small Data Regime?

1 code implementation NeurIPS 2021 Andrew Foong, Wessel Bruinsma, David Burt, Richard Turner

Interestingly, this lower bound recovers the Chernoff test set bound if the posterior is equal to the prior.
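For reference, the Chernoff test set bound mentioned here is usually stated in kl form (written from memory, not quoted from the paper): for n i.i.d. test examples with empirical risk R̂ and true risk R, with probability at least 1 − δ,

```latex
\mathrm{kl}\!\left(\hat{R} \,\middle\|\, R\right) \le \frac{\log(1/\delta)}{n},
\qquad
\mathrm{kl}(q \,\|\, p) = q \log\frac{q}{p} + (1-q)\log\frac{1-q}{1-p},
```

where kl is the KL divergence between Bernoulli distributions with the given means; the paper's lower bound reduces to this when the posterior equals the prior.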

Challenges and Pitfalls of Bayesian Unlearning

no code implementations 7 Jul 2022 Ambrish Rawat, James Requeima, Wessel Bruinsma, Richard Turner

Machine unlearning refers to the task of removing a subset of training data, thereby removing its contributions to a trained model.

Machine Unlearning Variational Inference
