Search Results for author: James Requeima

Found 21 papers, 12 papers with code

JoLT: Joint Probabilistic Predictions on Tabular Data Using LLMs

2 code implementations • 17 Feb 2025 • Aliaksandra Shysheya, John Bronskill, James Requeima, Shoaib Ahmed Siddiqui, Javier Gonzalez, David Duvenaud, Richard E. Turner

We introduce JoLT (Joint LLM Process for Tabular data), a simple method for probabilistic predictions on tabular data based on Large Language Models (LLMs).

Imputation • In-Context Learning +1
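A minimal sketch of the general recipe this points at: serialise a table row into text, score candidate labels with an LLM, and normalise the scores into a predictive distribution. The prompt format and the `score_candidate` stub below are illustrative assumptions, not JoLT's actual interface.

```python
# Sketch: LLM-based probabilistic prediction on tabular rows.
import math

def serialize_row(features: dict) -> str:
    """Turn a table row into a textual prompt (one assumed format)."""
    return ", ".join(f"{k}: {v}" for k, v in features.items()) + ". Label:"

def score_candidate(prompt: str, candidate: str) -> float:
    """Stand-in for an LLM call returning the total log-probability of
    `candidate` as a continuation of `prompt`. A real implementation
    would sum token log-probs from the model."""
    return -float(len(candidate))  # placeholder heuristic only

def predictive_distribution(features: dict, labels: list[str]) -> dict:
    """Normalise candidate log-scores into a distribution over labels."""
    prompt = serialize_row(features)
    logps = {lab: score_candidate(prompt, " " + lab) for lab in labels}
    z = max(logps.values())
    exps = {lab: math.exp(lp - z) for lab, lp in logps.items()}
    total = sum(exps.values())
    return {lab: e / total for lab, e in exps.items()}

print(predictive_distribution({"age": 41, "income": "52k"}, ["yes", "no"]))
```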

A Meta-Learning Approach to Bayesian Causal Discovery

no code implementations • 21 Dec 2024 • Anish Dhir, Matthew Ashman, James Requeima, Mark van der Wilk

To address these limitations, we propose a Bayesian meta-learning model that allows for sampling causal structures from the posterior and encodes these key properties.

Causal Discovery • Meta-Learning
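As a toy illustration of "sampling causal structures from the posterior", the sketch below draws adjacency matrices from assumed per-edge posterior probabilities and keeps only the acyclic ones; the amortised model that would produce those probabilities is stubbed out.

```python
# Sketch: rejection-sampling DAGs from per-edge posterior probabilities.
import numpy as np

rng = np.random.default_rng(0)

def is_dag(adj: np.ndarray) -> bool:
    """Kahn-style check: repeatedly remove nodes with no incoming edges."""
    remaining, changed = set(range(adj.shape[0])), True
    while remaining and changed:
        changed = False
        for i in list(remaining):
            if all(adj[j, i] == 0 for j in remaining):
                remaining.discard(i)
                changed = True
    return not remaining  # leftover nodes imply a cycle

def sample_dags(edge_probs, n_samples=200, max_tries=10_000):
    d, samples = edge_probs.shape[0], []
    for _ in range(max_tries):
        if len(samples) == n_samples:
            break
        adj = (rng.random((d, d)) < edge_probs).astype(int)
        np.fill_diagonal(adj, 0)
        if is_dag(adj):
            samples.append(adj)
    return samples

# Hypothetical posterior edge probabilities over 3 variables.
edge_probs = np.array([[0.0, 0.9, 0.1],
                       [0.0, 0.0, 0.8],
                       [0.2, 0.0, 0.0]])
samples = sample_dags(edge_probs)
print("P(edge 1 -> 2 | data):", np.mean([a[1, 2] for a in samples]))
```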

Context is Key: A Benchmark for Forecasting with Essential Textual Information

1 code implementation • 24 Oct 2024 • Andrew Robert Williams, Arjun Ashok, Étienne Marcotte, Valentina Zantedeschi, Jithendaraa Subramanian, Roland Riachi, James Requeima, Alexandre Lacoste, Irina Rish, Nicolas Chapados, Alexandre Drouin

To address this, we introduce "Context is Key" (CiK), a time series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context, requiring models to integrate both modalities.

Decision Making • Time Series +1
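A minimal sketch of what a task instance in such a benchmark might look like: a numerical history paired with textual context that the forecaster must exploit. Field names and the prompt format are assumptions for illustration, not CiK's actual API.

```python
# Sketch: a forecasting task that pairs numbers with essential text.
from dataclasses import dataclass

@dataclass
class ContextualForecastTask:
    history: list[float]   # observed numerical series
    context: str           # essential textual side information
    horizon: int           # number of future steps to predict

def to_prompt(task: ContextualForecastTask) -> str:
    """Render a task as a single prompt for an LLM-based forecaster."""
    values = ", ".join(f"{v:.2f}" for v in task.history)
    return (f"Context: {task.context}\n"
            f"History: {values}\n"
            f"Forecast the next {task.horizon} values:")

task = ContextualForecastTask(
    history=[12.1, 12.4, 12.9, 13.6],
    context="A maintenance shutdown is scheduled for the next two days.",
    horizon=2,
)
print(to_prompt(task))
```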

Translation Equivariant Transformer Neural Processes

1 code implementation • 18 Jun 2024 • Matthew Ashman, Cristiana Diaconu, Junhyuck Kim, Lakee Sivaraya, Stratis Markou, James Requeima, Wessel P. Bruinsma, Richard E. Turner

Notably, the posterior prediction maps for data that are stationary -- a common assumption in spatio-temporal modelling -- exhibit translation equivariance.

Translation
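The property is easy to verify numerically for a stationary Gaussian process, whose posterior prediction map is translation equivariant: shifting the context points and the query points together leaves the predictions unchanged. A small check with an RBF kernel (kernel and data are arbitrary choices for illustration):

```python
# Check: shifting a stationary GP's data shifts its posterior mean map.
import numpy as np

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_mean(x_ctx, y_ctx, x_tgt, noise=1e-6):
    K = rbf(x_ctx, x_ctx) + noise * np.eye(len(x_ctx))
    return rbf(x_tgt, x_ctx) @ np.linalg.solve(K, y_ctx)

x = np.array([-1.0, 0.0, 2.0])
y = np.array([0.3, -0.1, 0.8])
xs = np.linspace(-3, 3, 7)
tau = 1.5

m = gp_mean(x, y, xs)                     # predictions from original data
m_shift = gp_mean(x + tau, y, xs + tau)   # shifted data, shifted queries
print(np.allclose(m, m_shift))            # True: translation equivariance
```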

LLM Processes: Numerical Predictive Distributions Conditioned on Natural Language

1 code implementation • 21 May 2024 • James Requeima, John Bronskill, Dami Choi, Richard E. Turner, David Duvenaud

Machine learning practitioners often face significant challenges in formally integrating their prior knowledge and beliefs into predictive models, limiting the potential for nuanced and context-aware analyses.

Regression
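A minimal sketch of the general idea: condition on free-text prior knowledge plus observed (x, y) pairs, then read a predictive distribution off sampled numeric completions. `sample_llm` is a hypothetical stand-in for a real LLM sampling call, not the paper's implementation.

```python
# Sketch: numerical predictive distributions from text-conditioned sampling.
import random
import statistics

def build_prompt(prior_text, observations, x_query):
    pairs = "\n".join(f"x={x}, y={y}" for x, y in observations)
    return f"{prior_text}\n{pairs}\nx={x_query}, y="

def sample_llm(prompt: str) -> float:
    """Placeholder: a real implementation would sample a numeric string
    from the language model and parse it to a float."""
    return random.gauss(5.0, 0.5)

def predictive_samples(prior_text, observations, x_query, n=200):
    prompt = build_prompt(prior_text, observations, x_query)
    return [sample_llm(prompt) for _ in range(n)]

ys = predictive_samples(
    "The sensor saturates near y=10.", [(1, 2.1), (2, 3.9)], 3)
print(statistics.mean(ys), statistics.stdev(ys))  # empirical predictive
```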

Aardvark weather: end-to-end data-driven weather forecasting

no code implementations • 30 Mar 2024 • Anna Vaughan, Stratis Markou, Will Tebbutt, James Requeima, Wessel P. Bruinsma, Tom R. Andersson, Michael Herzog, Nicholas D. Lane, Matthew Chantry, J. Scott Hosking, Richard E. Turner

Local station forecasts are skillful up to ten days lead time and achieve errors comparable to, and often lower than, those of a post-processed global NWP baseline and a state-of-the-art end-to-end forecasting system with input from human forecasters.

Weather Forecasting

Diffusion-Augmented Neural Processes

no code implementations • 16 Nov 2023 • Lorenzo Bonito, James Requeima, Aliaksandra Shysheya, Richard E. Turner

Over the last few years, Neural Processes have become a useful modelling tool in many application areas, such as healthcare and climate sciences, in which data are scarce and prediction uncertainty estimates are indispensable.

Sim2Real for Environmental Neural Processes

1 code implementation • 30 Oct 2023 • Jonas Scholz, Tom R. Andersson, Anna Vaughan, James Requeima, Richard E. Turner

On held-out weather stations, Sim2Real training substantially outperforms the same model architecture trained only with reanalysis data or only with station data, showing that reanalysis data can serve as a stepping stone for learning from real observations.
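Schematically, the Sim2Real recipe is a two-stage fit: pretrain on plentiful simulated (reanalysis) data, then fine-tune on scarce real station data. The model and datasets below are toy stand-ins, not the paper's architecture.

```python
# Sketch: pretrain on simulated data, fine-tune on real observations.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()

def fit(model, xs, ys, lr, steps):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(xs), ys).backward()
        opt.step()

# Stage 1: large simulated (reanalysis) dataset.
x_sim, y_sim = torch.randn(10_000, 4), torch.randn(10_000, 1)
fit(model, x_sim, y_sim, lr=1e-3, steps=500)

# Stage 2: small real (station) dataset, gentler learning rate.
x_real, y_real = torch.randn(200, 4), torch.randn(200, 1)
fit(model, x_real, y_real, lr=1e-4, steps=100)
```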

Challenges and Pitfalls of Bayesian Unlearning

no code implementations • 7 Jul 2022 • Ambrish Rawat, James Requeima, Wessel Bruinsma, Richard Turner

Machine unlearning refers to the task of removing a subset of training data, thereby removing its contributions from a trained model.

Machine Unlearning • Variational Inference
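In the conjugate case the task is exactly solvable, which makes the definition concrete: a Gaussian linear-regression posterior depends on the data only through X^T X and X^T y, so deleting rows amounts to subtracting their statistics. The paper's focus is the harder approximate-inference setting, where no such exact downdate exists; the sketch below only illustrates the task definition.

```python
# Exact Bayesian unlearning in conjugate linear regression.
import numpy as np

rng = np.random.default_rng(1)
alpha, noise_var = 1.0, 0.25          # prior precision, noise variance

def posterior_mean(xtx, xty, d):
    prec = alpha * np.eye(d) + xtx / noise_var
    return np.linalg.solve(prec, xty / noise_var)

X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.5, size=50)
forget, keep = slice(0, 10), slice(10, 50)   # rows to unlearn / retain

# Train on everything, then subtract the forgotten rows' statistics ...
xtx_del = X.T @ X - X[forget].T @ X[forget]
xty_del = X.T @ y - X[forget].T @ y[forget]
mean_unlearned = posterior_mean(xtx_del, xty_del, 3)

# ... which matches retraining on the retained rows from scratch.
mean_retrained = posterior_mean(X[keep].T @ X[keep], X[keep].T @ y[keep], 3)
print(np.allclose(mean_unlearned, mean_retrained))   # True
```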

Practical Conditional Neural Processes Via Tractable Dependent Predictions

no code implementations • 16 Mar 2022 • Stratis Markou, James Requeima, Wessel P. Bruinsma, Anna Vaughan, Richard E. Turner

Existing approaches which model output dependencies, such as Neural Processes (NPs; Garnelo et al., 2018b) or the FullConvGNP (Bruinsma et al., 2021), are either complicated to train or prohibitively expensive.

Decision Making • Meta-Learning

Practical Conditional Neural Process Via Tractable Dependent Predictions

no code implementations • ICLR 2022 • Stratis Markou, James Requeima, Wessel Bruinsma, Anna Vaughan, Richard E. Turner

Existing approaches which model output dependencies, such as Neural Processes (NPs; Garnelo et al., 2018) or the FullConvGNP (Bruinsma et al., 2021), are either complicated to train or prohibitively expensive.

Decision Making • Meta-Learning

Efficient Gaussian Neural Processes for Regression

no code implementations • 22 Aug 2021 • Stratis Markou, James Requeima, Wessel Bruinsma, Richard Turner

Conditional Neural Processes (CNPs; Garnelo et al., 2018) are an attractive family of meta-learning models which produce well-calibrated predictions, enable fast inference at test time, and are trainable via a simple maximum likelihood procedure.

Decision Making • Meta-Learning +1
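A minimal PyTorch sketch of a CNP exhibiting the properties named above: a permutation-invariant encoder summarises the context set into one vector, the decoder emits a Gaussian per target point, and training is plain maximum likelihood. The architecture, sizes, and toy task are illustrative assumptions.

```python
# Sketch: a tiny Conditional Neural Process trained by maximum likelihood.
import torch
from torch import nn

class TinyCNP(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))
        self.decoder = nn.Sequential(nn.Linear(hidden + 1, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2))  # mean, log-var

    def forward(self, x_ctx, y_ctx, x_tgt):
        # Mean-pooling makes the representation permutation invariant.
        r = self.encoder(torch.cat([x_ctx, y_ctx], -1)).mean(0)
        h = torch.cat([x_tgt, r.expand(len(x_tgt), -1)], -1)
        return self.decoder(h).chunk(2, dim=-1)

model = TinyCNP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):                       # meta-train on random toy tasks
    x = torch.rand(20, 1) * 4 - 2
    y = torch.sin(x) + 0.1 * torch.randn_like(x)
    mean, log_var = model(x[:10], y[:10], x[10:])
    nll = 0.5 * (log_var + (y[10:] - mean) ** 2 / log_var.exp()).mean()
    opt.zero_grad(); nll.backward(); opt.step()
```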

The Gaussian Neural Process

1 code implementation • AABI Symposium 2021 • Wessel P. Bruinsma, James Requeima, Andrew Y. K. Foong, Jonathan Gordon, Richard E. Turner

Neural Processes (NPs; Garnelo et al., 2018a, b) are a rich class of models for meta-learning that map data sets directly to predictive stochastic processes.

Meta-Learning • Translation

TaskNorm: Rethinking Batch Normalization for Meta-Learning

2 code implementations • ICML 2020 • John Bronskill, Jonathan Gordon, James Requeima, Sebastian Nowozin, Richard E. Turner

Modern meta-learning approaches for image classification rely on increasingly deep networks to achieve state-of-the-art performance, making batch normalization an essential component of meta-learning pipelines.

General Classification • Image Classification +1
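A simplified sketch of the TaskNorm idea: normalisation moments come from the task's context set, blended with per-instance moments by a weight that depends on context-set size (in the spirit of the paper's TaskNorm-I variant; the exact parameterisation is simplified here).

```python
# Sketch: context-set normalisation blended with instance statistics.
import torch

def tasknorm_1d(h, h_ctx, scale, offset, eps=1e-5):
    """Normalise target activations h using context-set moments blended
    with per-instance moments; the blend depends on context-set size."""
    alpha = torch.sigmoid(scale * h_ctx.shape[0] + offset)
    mu_c, var_c = h_ctx.mean(0), h_ctx.var(0, unbiased=False)
    mu_i = h.mean(-1, keepdim=True)
    var_i = h.var(-1, keepdim=True, unbiased=False)
    mu = alpha * mu_c + (1 - alpha) * mu_i
    var = alpha * var_c + (1 - alpha) * var_i
    return (h - mu) / torch.sqrt(var + eps)

h_ctx = torch.randn(5, 8)     # context-set activations (a small task)
h = torch.randn(3, 8)         # target activations to normalise
scale, offset = torch.tensor(0.1), torch.tensor(-1.0)  # learned in practice
print(tasknorm_1d(h, h_ctx, scale, offset).shape)      # torch.Size([3, 8])
```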

Convolutional Conditional Neural Processes

3 code implementations • ICLR 2020 • Jonathan Gordon, Wessel P. Bruinsma, Andrew Y. K. Foong, James Requeima, Yann Dubois, Richard E. Turner

We introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data.

Inductive Bias • Time Series +3
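A sketch of the ConvCNP's first step, the "SetConv" encoder: an off-the-grid context set is mapped to a density channel and a normalised data channel on a uniform grid, after which an ordinary CNN (translation equivariant by construction) can process it. The grid and length-scale below are arbitrary assumptions.

```python
# Sketch: SetConv encoding of an off-the-grid context set.
import numpy as np

def set_conv(x_ctx, y_ctx, grid, ls=0.2):
    w = np.exp(-0.5 * (grid[:, None] - x_ctx[None, :]) ** 2 / ls ** 2)
    density = w.sum(axis=1)                          # channel 0: density
    signal = w @ y_ctx / np.maximum(density, 1e-8)   # channel 1: data
    return np.stack([density, signal], axis=0)

x_ctx = np.array([-0.7, 0.1, 0.9])
y_ctx = np.array([0.4, -0.2, 0.8])
grid = np.linspace(-2, 2, 64)
rep = set_conv(x_ctx, y_ctx, grid)
print(rep.shape)   # (2, 64): ready for a 1-D CNN decoder
```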

Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes

1 code implementation • NeurIPS 2019 • James Requeima, Jonathan Gordon, John Bronskill, Sebastian Nowozin, Richard E. Turner

We introduce a conditional neural process-based approach to the multi-task classification setting for this purpose, and establish connections to the meta-learning and few-shot learning literature.

Active Learning • Continual Learning +4
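A heavily simplified sketch of the adaptive-classifier idea: per-task classifier weights are generated directly from the context set (here, plain class means of the embeddings), so a new task requires no gradient-based fine-tuning. CNAPs additionally adapt the feature extractor itself, which is omitted here.

```python
# Sketch: classifier weights generated from a task's context set.
import torch
from torch import nn

embed = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32))

def adapt_and_classify(x_ctx, y_ctx, x_tgt, n_classes):
    z_ctx, z_tgt = embed(x_ctx), embed(x_tgt)
    # One weight row per class, computed from context embeddings.
    weights = torch.stack(
        [z_ctx[y_ctx == c].mean(0) for c in range(n_classes)])
    return z_tgt @ weights.T          # logits for the target points

x_ctx = torch.randn(10, 16)
y_ctx = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])
x_tgt = torch.randn(4, 16)
print(adapt_and_classify(x_ctx, y_ctx, x_tgt, n_classes=2).shape)  # (4, 2)
```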

The Gaussian Process Autoregressive Regression Model (GPAR)

3 code implementations • 20 Feb 2018 • James Requeima, Will Tebbutt, Wessel Bruinsma, Richard E. Turner

Multi-output regression models must exploit dependencies between outputs to maximise predictive performance.

Gaussian Processes • model +1
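A minimal GPAR-style sketch with scikit-learn: each output is modelled by a GP that conditions on the inputs plus the preceding outputs, which is how dependencies between outputs get exploited. The kernels and toy data are illustrative defaults, not the paper's setup.

```python
# Sketch: autoregressive chaining of GPs across outputs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=(40, 1))
y1 = np.sin(x[:, 0]) + 0.05 * rng.normal(size=40)
y2 = y1 ** 2 + 0.05 * rng.normal(size=40)      # second output depends on y1

gp1 = GaussianProcessRegressor().fit(x, y1)                  # y1 | x
gp2 = GaussianProcessRegressor().fit(
    np.column_stack([x, y1]), y2)                            # y2 | x, y1

x_new = np.array([[0.5]])
y1_hat = gp1.predict(x_new)                                  # first output
y2_hat = gp2.predict(np.column_stack([x_new, y1_hat]))       # feeds the next
print(y1_hat, y2_hat)
```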
