Search Results for author: Henry Moss

Found 7 papers, 2 papers with code

RAIN: Reinforcement Algorithms for Improving Numerical Weather and Climate Models

1 code implementation · 28 Aug 2024 · Pritthijit Nath, Henry Moss, Emily Shuckburgh, Mark Webb

This study explores integrating reinforcement learning (RL) with idealised climate models to address key parameterisation challenges in climate science.

Reinforcement Learning (RL)

Linear combinations of latents in diffusion models: interpolation and beyond

no code implementations · 16 Aug 2024 · Erik Bodin, Henry Moss, Carl Henrik Ek

We propose Combination of Gaussian variables (COG), a novel interpolation method that addresses this; it is easy to implement yet matches or improves upon current methods.
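The abstract above hinges on combining Gaussian latents linearly. A minimal numpy sketch of the underlying idea — assuming (this is an illustration, not the paper's implementation) that the key property used is that a linear combination of standard Gaussian latents, rescaled by the norm of its weights, remains standard Gaussian; the function name `gaussian_interp` is hypothetical:

```python
import numpy as np

def gaussian_interp(z0, z1, t):
    """Interpolate between two standard-Gaussian latents so the result
    stays standard Gaussian: rescale the weights to unit Euclidean norm."""
    w0, w1 = 1.0 - t, t
    scale = np.sqrt(w0**2 + w1**2)
    return (w0 * z0 + w1 * z1) / scale

rng = np.random.default_rng(0)
z0, z1 = rng.standard_normal(512), rng.standard_normal(512)
z = gaussian_interp(z0, z1, 0.5)  # midpoint latent, still roughly unit variance
```

Naive linear interpolation `(1-t)*z0 + t*z1` would shrink the variance at the midpoint (to 0.5 for independent latents), which is exactly the distribution mismatch such renormalised combinations avoid.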

Efficient modeling of sub-kilometer surface wind with Gaussian processes and neural networks

no code implementations · 21 May 2024 · Francesco Zanetta, Daniele Nerini, Matteo Buzzi, Henry Moss

Accurately representing surface weather at the sub-kilometer scale is crucial for optimal decision-making in a wide range of applications.

Decision Making · Gaussian Processes

Fantasizing with Dual GPs in Bayesian Optimization and Active Learning

no code implementations · 2 Nov 2022 · Paul E. Chang, Prakhar Verma, ST John, Victor Picheny, Henry Moss, Arno Solin

Gaussian processes (GPs) are the main surrogate functions used for sequential modelling such as Bayesian Optimization and Active Learning.

Active Learning · Bayesian Optimization +1
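The abstract above states that GPs are the main surrogate for sequential modelling. As background, a minimal numpy sketch of exact GP posterior inference with a squared-exponential kernel (generic textbook GP regression, not this paper's dual-GP method; all names here are illustrative):

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean and covariance of a zero-mean GP at x_test."""
    k_xx = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k_xs = rbf(x_train, x_test)
    k_ss = rbf(x_test, x_test)
    alpha = np.linalg.solve(k_xx, y_train)
    mean = k_xs.T @ alpha
    cov = k_ss - k_xs.T @ np.linalg.solve(k_xx, k_xs)
    return mean, cov

x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
mu, cov = gp_posterior(x, y, np.array([0.5, 1.0, 1.5]))
```

At an observed input the posterior mean returns (almost) the observed value and the posterior variance collapses, which is what makes GPs usable as calibrated surrogates in Bayesian optimisation and active learning.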

Scalable Thompson Sampling using Sparse Gaussian Process Models

no code implementations · NeurIPS 2021 · Sattar Vakili, Henry Moss, Artem Artemev, Vincent Dutordoir, Victor Picheny

We provide theoretical guarantees and show that the drastic reduction in computational complexity of scalable TS can be enjoyed without loss in the regret performance over the standard TS.

Thompson Sampling
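For context on the paper above, a minimal numpy sketch of one Thompson-sampling step with an exact GP posterior (the paper's contribution is a sparse, scalable variant; this shows only the standard TS loop body, and `thompson_step` is a hypothetical name):

```python
import numpy as np

def thompson_step(x_obs, y_obs, candidates, rng, noise=1e-4):
    """One Thompson-sampling step: draw a function sample from the GP
    posterior on a candidate grid and return the sample's maximiser."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)
    k_xx = k(x_obs, x_obs) + noise * np.eye(len(x_obs))
    k_xs = k(x_obs, candidates)
    mean = k_xs.T @ np.linalg.solve(k_xx, y_obs)
    cov = k(candidates, candidates) - k_xs.T @ np.linalg.solve(k_xx, k_xs)
    jitter = 1e-9 * np.eye(len(candidates))  # numerical stability
    sample = rng.multivariate_normal(mean, cov + jitter)
    return candidates[np.argmax(sample)]

rng = np.random.default_rng(1)
x_obs = np.array([0.1, 0.9])
y_obs = np.sin(3 * x_obs)
next_x = thompson_step(x_obs, y_obs, np.linspace(0, 1, 50), rng)
```

The cost here is dominated by sampling from the dense posterior covariance; replacing the exact GP with a sparse approximation is what makes the step scalable, which is the regime the paper's guarantees cover.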

Bayesian Quantile and Expectile Optimisation

no code implementations · 12 Jan 2020 · Victor Picheny, Henry Moss, Léonard Torossian, Nicolas Durrande

In this paper, we propose new variational models for Bayesian quantile and expectile regression that are well-suited for heteroscedastic noise settings.

Bayesian Optimisation · Gaussian Processes +1
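As background for the quantile-regression objective mentioned above, a minimal numpy sketch of the pinball (quantile) loss, whose minimiser over constant predictions is the tau-quantile of the data (standard quantile-regression machinery, not the paper's variational model):

```python
import numpy as np

def pinball_loss(y, pred, tau):
    """Pinball loss: penalises under-prediction with weight tau and
    over-prediction with weight (1 - tau)."""
    e = y - pred
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

y = np.array([1.0, 2.0, 3.0, 10.0])
# With tau = 0.5 the loss is minimised near the empirical median:
losses = {p: pinball_loss(y, p, 0.5) for p in (1.0, 2.5, 9.0)}
```

The asymmetric weighting is what lets tau other than 0.5 target upper or lower quantiles, which matters precisely in the heteroscedastic-noise settings the abstract describes.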

Using J-K-fold Cross Validation To Reduce Variance When Tuning NLP Models

1 code implementation · COLING 2018 · Henry Moss, David Leslie, Paul Rayson

K-fold cross validation (CV) is a popular method for estimating the true performance of machine learning models, allowing model selection and parameter tuning.

Document Classification · General Classification +4
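The idea behind J-K-fold CV in the paper above is to repeat K-fold cross validation J times with different random splits and average all J*K fold scores, reducing the variance of the performance estimate. A minimal numpy sketch of that loop (the helper names and toy scorer are illustrative, not from the paper's released code):

```python
import numpy as np

def jk_fold_scores(n, j_reps, k_folds, score_fn, seed=0):
    """J repetitions of K-fold CV: average the J*K fold scores to get a
    lower-variance estimate of model performance."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(j_reps):
        idx = rng.permutation(n)  # fresh random split each repetition
        for fold in np.array_split(idx, k_folds):
            train = np.setdiff1d(idx, fold)
            scores.append(score_fn(train, fold))
    return np.mean(scores), np.std(scores)

# Toy scorer: fraction of held-out labels matching the training majority.
labels = np.array([0] * 60 + [1] * 40)

def score_fn(train, test):
    majority = np.bincount(labels[train]).argmax()
    return np.mean(labels[test] == majority)

mean_score, std_score = jk_fold_scores(len(labels), j_reps=5, k_folds=10,
                                       score_fn=score_fn)
```

A single K-fold run gives one estimate per split; averaging over J independent splits shrinks the split-induced variance, which is the source of noise the paper targets when tuning NLP models.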
