Search Results for author: Tom Rainforth

Found 56 papers, 36 papers with code

Rethinking Variational Inference for Probabilistic Programs with Stochastic Support

1 code implementation • 1 Nov 2023 • Tim Reichelt, Luke Ong, Tom Rainforth

We introduce Support Decomposition Variational Inference (SDVI), a new variational inference (VI) approach for probabilistic programs with stochastic support.

Variational Inference

Beyond Bayesian Model Averaging over Paths in Probabilistic Programs with Stochastic Support

1 code implementation • 23 Oct 2023 • Tim Reichelt, Luke Ong, Tom Rainforth

The posterior in probabilistic programs with stochastic support decomposes as a weighted sum of the local posterior distributions associated with each possible program path.
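The decomposition referred to above can be written out explicitly; in assumed notation (not quoted from the paper), with k indexing program paths, p_k the local posterior on path k, and Z_k its local normalising constant:

```latex
p(\theta \mid y) \;=\; \sum_{k} w_k \, p_k(\theta \mid y),
\qquad
w_k \;=\; \frac{Z_k}{\sum_{k'} Z_{k'}} .
```

Weighting each path by its relative evidence corresponds to Bayesian model averaging over paths; the title signals that the paper examines alternatives to this default weighting.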

SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning

1 code implementation • 1 Aug 2023 • Ning Miao, Yee Whye Teh, Tom Rainforth

The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning.

GSM8K • Math +1

In-Context Learning Learns Label Relationships but Is Not Conventional Learning

1 code implementation • 23 Jul 2023 • Jannik Kossen, Yarin Gal, Tom Rainforth

The predictions of Large Language Models (LLMs) on downstream tasks often improve significantly when including examples of the input--label relationship in the context.

On the Expected Size of Conformal Prediction Sets

no code implementations • 12 Jun 2023 • Guneet S. Dhillon, George Deligiannidis, Tom Rainforth

While conformal predictors reap the benefits of rigorous statistical guarantees for their error frequency, the size of their corresponding prediction sets is critical to their practical utility.

Conformal Prediction
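As background for the set sizes the paper studies, here is a minimal split-conformal sketch for regression intervals, using the standard absolute-residual recipe with hypothetical variable names (this is generic background, not code from the paper). The interval width 2·q̂ is exactly the kind of "expected size" quantity at issue:

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_labels, test_pred, alpha=0.1):
    """Split conformal prediction with absolute-residual conformity scores.

    Calibrates a quantile q_hat of |y - y_hat| on held-out data, then returns
    the interval [test_pred - q_hat, test_pred + q_hat], which covers the true
    label with probability >= 1 - alpha under exchangeability.
    """
    scores = np.abs(cal_labels - cal_preds)  # conformity scores on calibration set
    n = len(scores)
    # finite-sample-corrected quantile level: ceil((n+1)(1-alpha)) / n
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(scores, min(q_level, 1.0), method="higher")
    return test_pred - q_hat, test_pred + q_hat
```

The prediction set's size here is 2·q̂, fixed by the calibration scores; the paper analyses how large such sets are expected to be.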

Prediction-Oriented Bayesian Active Learning

1 code implementation • 17 Apr 2023 • Freddie Bickford Smith, Andreas Kirsch, Sebastian Farquhar, Yarin Gal, Adam Foster, Tom Rainforth

Information-theoretic approaches to active learning have traditionally focused on maximising the information gathered about the model parameters, most commonly by optimising the BALD score.

Active Learning
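For context, the BALD score mentioned above is the mutual information between a model's prediction and its parameters. A generic estimate from an ensemble of softmax outputs (e.g. MC-dropout forward passes) looks like the following sketch; this is standard background, not the paper's code, and the paper argues for targeting information about test-time predictions instead:

```python
import numpy as np

def bald_score(probs):
    """BALD: mutual information between the predicted label and model parameters.

    `probs` has shape (n_samples, n_classes): one predictive distribution per
    posterior/ensemble sample. BALD = H[mean prediction] - mean[H[prediction]],
    i.e. total predictive entropy minus expected aleatoric entropy.
    """
    eps = 1e-12  # numerical floor inside the logs
    mean_p = probs.mean(axis=0)
    entropy_of_mean = -np.sum(mean_p * np.log(mean_p + eps))
    mean_entropy = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    return entropy_of_mean - mean_entropy
```

The score is zero when all posterior samples agree and grows with their disagreement, which is why it targets information about the parameters rather than about downstream predictions.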

Modern Bayesian Experimental Design

no code implementations • 28 Feb 2023 • Tom Rainforth, Adam Foster, Desi R Ivanova, Freddie Bickford Smith

Bayesian experimental design (BED) provides a powerful and general framework for optimizing the design of experiments.

Experimental Design

CO-BED: Information-Theoretic Contextual Optimization via Bayesian Experimental Design

1 code implementation • 27 Feb 2023 • Desi R. Ivanova, Joel Jennings, Tom Rainforth, Cheng Zhang, Adam Foster

We formalize the problem of contextual optimization through the lens of Bayesian experimental design and propose CO-BED -- a general, model-agnostic framework for designing contextual experiments using information-theoretic principles.

Experimental Design

Do Bayesian Neural Networks Need To Be Fully Stochastic?

1 code implementation • 11 Nov 2022 • Mrinank Sharma, Sebastian Farquhar, Eric Nalisnick, Tom Rainforth

We investigate the benefit of treating all the parameters in a Bayesian neural network stochastically and find compelling theoretical and empirical evidence that this standard construction may be unnecessary.

A Continuous Time Framework for Discrete Denoising Models

1 code implementation • 30 May 2022 • Andrew Campbell, Joe Benton, Valentin De Bortoli, Tom Rainforth, George Deligiannidis, Arnaud Doucet

We provide the first complete continuous time framework for denoising diffusion models of discrete data.


Implicit Deep Adaptive Design: Policy-Based Experimental Design without Likelihoods

1 code implementation • NeurIPS 2021 • Desi R. Ivanova, Adam Foster, Steven Kleinegesse, Michael U. Gutmann, Tom Rainforth

We introduce implicit Deep Adaptive Design (iDAD), a new method for performing adaptive experiments in real-time with implicit models.

Experimental Design

Online Variational Filtering and Parameter Learning

1 code implementation • NeurIPS 2021 • Andrew Campbell, Yuyang Shi, Tom Rainforth, Arnaud Doucet

We present a variational method for online state estimation and parameter learning in state-space models (SSMs), a ubiquitous class of latent variable models for sequential data.

On Incorporating Inductive Biases into VAEs

1 code implementation • ICLR 2022 • Ning Miao, Emile Mathieu, N. Siddharth, Yee Whye Teh, Tom Rainforth

InteL-VAEs use an intermediary set of latent variables to control the stochasticity of the encoding process, before mapping these in turn to the latent representation using a parametric function that encapsulates our desired inductive bias(es).

Inductive Bias

Learning Multimodal VAEs through Mutual Supervision

1 code implementation • ICLR 2022 • Tom Joy, Yuge Shi, Philip H. S. Torr, Tom Rainforth, Sebastian M. Schmon, N. Siddharth

Here we introduce a novel alternative, the MEME, that avoids such explicit combinations by repurposing semi-supervised VAEs to combine information between modalities implicitly through mutual supervision.

Test Distribution-Aware Active Learning: A Principled Approach Against Distribution Shift and Outliers

no code implementations • 22 Jun 2021 • Andreas Kirsch, Tom Rainforth, Yarin Gal

Expanding on MacKay (1992), we argue that conventional model-based methods for active learning - like BALD - have a fundamental shortfall: they fail to directly account for the test-time distribution of the input variables.

Active Learning • Test

Group Equivariant Subsampling

1 code implementation • NeurIPS 2021 • Jin Xu, Hyunjik Kim, Tom Rainforth, Yee Whye Teh

We use these layers to construct group equivariant autoencoders (GAEs) that allow us to learn low-dimensional equivariant representations.


Expectation Programming: Adapting Probabilistic Programming Systems to Estimate Expectations Efficiently

no code implementations • Approximate Inference (AABI) Symposium 2021 • Tim Reichelt, Adam Goliński, Luke Ong, Tom Rainforth

We show that the standard computational pipeline of probabilistic programming systems (PPSs) can be inefficient for estimating expectations and introduce the concept of expectation programming to address this.

Probabilistic Programming

Self-Attention Between Datapoints: Going Beyond Individual Input-Output Pairs in Deep Learning

2 code implementations • NeurIPS 2021 • Jannik Kossen, Neil Band, Clare Lyle, Aidan N. Gomez, Tom Rainforth, Yarin Gal

We challenge a common assumption underlying most supervised deep learning: that a model makes a prediction depending only on its parameters and the features of a single input.

3D Part Segmentation

Active Testing: Sample-Efficient Model Evaluation

1 code implementation • 9 Mar 2021 • Jannik Kossen, Sebastian Farquhar, Yarin Gal, Tom Rainforth

While approaches like active learning reduce the number of labels needed for model training, existing literature largely ignores the cost of labeling test data, typically unrealistically assuming large test sets for model evaluation.

Active Learning • Gaussian Processes +1

Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design

1 code implementation • 3 Mar 2021 • Adam Foster, Desi R. Ivanova, Ilyas Malik, Tom Rainforth

We introduce Deep Adaptive Design (DAD), a method for amortizing the cost of adaptive Bayesian experimental design that allows experiments to be run in real-time.

Experimental Design

Certifiably Robust Variational Autoencoders

no code implementations • 15 Feb 2021 • Ben Barrett, Alexander Camuto, Matthew Willetts, Tom Rainforth

We introduce an approach for training Variational Autoencoders (VAEs) that are certifiably robust to adversarial attack.

Adversarial Attack

On Statistical Bias In Active Learning: How and When To Fix It

no code implementations • ICLR 2021 • Sebastian Farquhar, Yarin Gal, Tom Rainforth

Active learning is a powerful tool when labelling data is expensive, but it introduces a bias because the training data no longer follows the population distribution.

Active Learning

On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes

1 code implementation • 1 Nov 2020 • Tim G. J. Rudner, Oscar Key, Yarin Gal, Tom Rainforth

We show that the gradient estimates used in training Deep Gaussian Processes (DGPs) with importance-weighted variational inference are susceptible to signal-to-noise ratio (SNR) issues.

Gaussian Processes • Variational Inference

Probabilistic Programs with Stochastic Conditioning

1 code implementation • 1 Oct 2020 • David Tolpin, Yuan Zhou, Tom Rainforth, Hongseok Yang

We tackle the problem of conditioning probabilistic programs on distributions of observable variables.

Probabilistic Programming

Towards a Theoretical Understanding of the Robustness of Variational Autoencoders

no code implementations • 14 Jul 2020 • Alexander Camuto, Matthew Willetts, Stephen Roberts, Chris Holmes, Tom Rainforth

We make inroads into understanding the robustness of Variational Autoencoders (VAEs) to adversarial attacks and other input perturbations.

Capturing Label Characteristics in VAEs

2 code implementations • ICLR 2021 • Tom Joy, Sebastian M. Schmon, Philip H. S. Torr, N. Siddharth, Tom Rainforth

We present a principled approach to incorporating labels in VAEs that captures the rich characteristic information associated with those labels.

Statistically Robust Neural Network Classification

1 code implementation • 10 Dec 2019 • Benjie Wang, Stefan Webb, Tom Rainforth

The SRR provides a distinct and complementary measure of robust performance, compared to natural and adversarial risk.

Classification • General Classification +1

Efficient Bayesian Inference for Nested Simulators

no code implementations • Approximate Inference (AABI) Symposium 2019 • Bradley Gram-Hansen, Christian Schroeder de Witt, Robert Zinkov, Saeid Naderiparizi, Adam Scibior, Andreas Munk, Frank Wood, Mehrdad Ghadiri, Philip Torr, Yee Whye Teh, Atilim Gunes Baydin, Tom Rainforth

We introduce two approaches for conducting efficient Bayesian inference in stochastic simulators containing nested stochastic sub-procedures, i.e., internal procedures for which the density cannot be calculated directly, such as rejection sampling loops.

Bayesian Inference

Amortized Monte Carlo Integration

1 code implementation • 18 Jul 2019 • Adam Goliński, Frank Wood, Tom Rainforth

At runtime, samples are produced separately from each amortized proposal, before being combined to an overall estimate of the expectation.

Bayesian Inference

Improving VAEs' Robustness to Adversarial Attack

no code implementations • ICLR 2021 • Matthew Willetts, Alexander Camuto, Tom Rainforth, Stephen Roberts, Chris Holmes

We make significant advances in addressing this issue by introducing methods for producing adversarially robust VAEs.

Adversarial Attack

On the Fairness of Disentangled Representations

no code implementations • NeurIPS 2019 • Francesco Locatello, Gabriele Abbati, Tom Rainforth, Stefan Bauer, Bernhard Schölkopf, Olivier Bachem

Recently there has been a significant interest in learning disentangled representations, as they promise increased interpretability, generalization to unseen scenarios and faster learning on downstream tasks.

Disentanglement • Fairness

Hijacking Malaria Simulators with Probabilistic Programming

no code implementations • 29 May 2019 • Bradley Gram-Hansen, Christian Schröder de Witt, Tom Rainforth, Philip H. S. Torr, Yee Whye Teh, Atılım Güneş Baydin

Epidemiology simulations have become a fundamental tool in the fight against the epidemics of various infectious diseases like AIDS and malaria.

Epidemiology • Probabilistic Programming

LF-PPL: A Low-Level First Order Probabilistic Programming Language for Non-Differentiable Models

1 code implementation • 6 Mar 2019 • Yuan Zhou, Bradley J. Gram-Hansen, Tobias Kohn, Tom Rainforth, Hongseok Yang, Frank Wood

We develop a new Low-level, First-order Probabilistic Programming Language (LF-PPL) suited for models containing a mix of continuous, discrete, and/or piecewise-continuous variables.

Probabilistic Programming

Disentangling Disentanglement in Variational Autoencoders

1 code implementation • 6 Dec 2018 • Emile Mathieu, Tom Rainforth, N. Siddharth, Yee Whye Teh

We develop a generalisation of disentanglement in VAEs---decomposition of the latent representation---characterising it as the fulfilment of two factors: a) the latent encodings of the data having an appropriate level of overlap, and b) the aggregate encoding of the data conforming to a desired structure, represented through the prior.

Clustering • Disentanglement

A Statistical Approach to Assessing Neural Network Robustness

1 code implementation • ICLR 2019 • Stefan Webb, Tom Rainforth, Yee Whye Teh, M. Pawan Kumar

Furthermore, it provides an ability to scale to larger networks than formal verification approaches.

On Exploration, Exploitation and Learning in Adaptive Importance Sampling

no code implementations • 31 Oct 2018 • Xiaoyu Lu, Tom Rainforth, Yuan Zhou, Jan-Willem van de Meent, Yee Whye Teh

We study adaptive importance sampling (AIS) as an online learning problem and argue for the importance of the trade-off between exploration and exploitation in this adaptation.

Inference Trees: Adaptive Inference with Exploration

no code implementations • 25 Jun 2018 • Tom Rainforth, Yuan Zhou, Xiaoyu Lu, Yee Whye Teh, Frank Wood, Hongseok Yang, Jan-Willem van de Meent

We introduce inference trees (ITs), a new class of inference methods that build on ideas from Monte Carlo tree search to perform adaptive sampling in a manner that balances exploration with exploitation, ensures consistency, and alleviates pathologies in existing adaptive methods.

Hamiltonian Monte Carlo for Probabilistic Programs with Discontinuities

1 code implementation • 7 Apr 2018 • Bradley Gram-Hansen, Yuan Zhou, Tobias Kohn, Tom Rainforth, Hongseok Yang, Frank Wood

Hamiltonian Monte Carlo (HMC) is arguably the dominant statistical inference algorithm used in most popular "first-order differentiable" Probabilistic Programming Languages (PPLs).

Probabilistic Programming

Nesting Probabilistic Programs

no code implementations • 16 Mar 2018 • Tom Rainforth

We formalize the notion of nesting probabilistic programming queries and investigate the resulting statistical implications.

Probabilistic Programming

Tighter Variational Bounds are Not Necessarily Better

3 code implementations • ICML 2018 • Tom Rainforth, Adam R. Kosiorek, Tuan Anh Le, Chris J. Maddison, Maximilian Igl, Frank Wood, Yee Whye Teh

We provide theoretical and empirical evidence that using tighter evidence lower bounds (ELBOs) can be detrimental to the process of learning an inference network by reducing the signal-to-noise ratio of the gradient estimator.
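The tighter bounds in question are the K-sample importance-weighted ELBOs; in standard form (background notation, not quoted from the paper):

```latex
\mathcal{L}_K
= \mathbb{E}_{z_1,\dots,z_K \sim q_\phi(z \mid x)}
\left[ \log \frac{1}{K} \sum_{k=1}^{K} \frac{p_\theta(x, z_k)}{q_\phi(z_k \mid x)} \right],
```

which is non-decreasing in K and recovers the standard ELBO at K = 1. The paper's point is that although the bound tightens as K grows, the signal-to-noise ratio of the gradient estimates for the inference network parameters φ can simultaneously deteriorate.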

Faithful Inversion of Generative Models for Effective Amortized Inference

no code implementations • NeurIPS 2018 • Stefan Webb, Adam Golinski, Robert Zinkov, N. Siddharth, Tom Rainforth, Yee Whye Teh, Frank Wood

Inference amortization methods share information across multiple posterior-inference problems, allowing each to be carried out more efficiently.

On Nesting Monte Carlo Estimators

no code implementations • ICML 2018 • Tom Rainforth, Robert Cornish, Hongseok Yang, Andrew Warrington, Frank Wood

Many problems in machine learning and statistics involve nested expectations and thus do not permit conventional Monte Carlo (MC) estimation.

Experimental Design

Bayesian Optimization for Probabilistic Programs

2 code implementations • NeurIPS 2016 • Tom Rainforth, Tuan Anh Le, Jan-Willem van de Meent, Michael A. Osborne, Frank Wood

We present the first general purpose framework for marginal maximum a posteriori estimation of probabilistic program variables.

Bayesian Optimization

Auto-Encoding Sequential Monte Carlo

1 code implementation • ICLR 2018 • Tuan Anh Le, Maximilian Igl, Tom Rainforth, Tom Jin, Frank Wood

We build on auto-encoding sequential Monte Carlo (AESMC): a method for model and proposal learning based on maximizing the lower bound to the log marginal likelihood in a broad family of structured probabilistic models.

On the Pitfalls of Nested Monte Carlo

no code implementations • 3 Dec 2016 • Tom Rainforth, Robert Cornish, Hongseok Yang, Frank Wood

In this paper, we analyse the behaviour of nested Monte Carlo (NMC) schemes, for which classical convergence proofs are insufficient.
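A nested expectation and its NMC estimator take the generic form (illustrative notation, assumed rather than quoted):

```latex
\gamma = \mathbb{E}_{y}\!\Big[ f\big(y,\; \mathbb{E}_{x \mid y}[g(x, y)]\big) \Big],
\qquad
\hat{\gamma}_{N,M} = \frac{1}{N} \sum_{n=1}^{N}
f\Big(y_n,\; \frac{1}{M} \sum_{m=1}^{M} g(x_{n,m}, y_n)\Big).
```

When f is nonlinear, the noisy inner estimate induces a bias at any finite M, so the estimator only converges when both the outer sample size N and the inner sample size M grow, unlike in conventional Monte Carlo.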

Probabilistic structure discovery in time series data

no code implementations • 21 Nov 2016 • David Janz, Brooks Paige, Tom Rainforth, Jan-Willem van de Meent, Frank Wood

Existing methods for structure discovery in time series data construct interpretable, compositional kernels for Gaussian process regression models.

Regression • Time Series +1

Interacting Particle Markov Chain Monte Carlo

1 code implementation • 16 Feb 2016 • Tom Rainforth, Christian A. Naesseth, Fredrik Lindsten, Brooks Paige, Jan-Willem van de Meent, Arnaud Doucet, Frank Wood

We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a PMCMC method based on an interacting pool of standard and conditional sequential Monte Carlo samplers.

Canonical Correlation Forests

3 code implementations • 20 Jul 2015 • Tom Rainforth, Frank Wood

We introduce canonical correlation forests (CCFs), a new decision tree ensemble method for classification and regression.

General Classification • Regression
