Search Results for author: Prakhar Verma

Found 7 papers, 5 papers with code

Memory-Based Dual Gaussian Processes for Sequential Learning

1 code implementation • 6 Jun 2023 • Paul E. Chang, Prakhar Verma, S. T. John, Arno Solin, Mohammad Emtiyaz Khan

Sequential learning with Gaussian processes (GPs) is challenging when access to past data is limited, for example, in continual and active learning.

Active Learning • Bayesian Optimization +2

Variational Gaussian Process Diffusion Processes

1 code implementation • 3 Jun 2023 • Prakhar Verma, Vincent Adam, Arno Solin

Diffusion processes are a class of stochastic differential equations (SDEs) providing a rich family of expressive models that arise naturally in dynamic modelling tasks.

Variational Inference
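The diffusion-as-SDE framing above can be illustrated with a generic simulation sketch. This is not code from the paper: the Ornstein–Uhlenbeck drift, the Euler–Maruyama scheme, and all parameter names here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's method): simulate a simple
# Ornstein-Uhlenbeck diffusion  dx = -theta * x dt + sigma dW
# with the Euler-Maruyama scheme.
def euler_maruyama(x0, theta, sigma, dt, n_steps, rng):
    xs = [x0]
    x = x0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))   # Brownian increment ~ N(0, dt)
        x = x + (-theta * x) * dt + sigma * dw
        xs.append(x)
    return np.array(xs)

rng = np.random.default_rng(0)
path = euler_maruyama(x0=1.0, theta=2.0, sigma=0.5, dt=0.01, n_steps=500, rng=rng)
```

Such simulated sample paths are the basic object that simulation-based SDE inference works with; variational approaches instead approximate the path distribution directly.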

PriorCVAE: scalable MCMC parameter inference with Bayesian deep generative modelling

2 code implementations • 9 Apr 2023 • Elizaveta Semenova, Prakhar Verma, Max Cairney-Leeming, Arno Solin, Samir Bhatt, Seth Flaxman

Recent advances have shown that GP priors, or their finite realisations, can be encoded using deep generative models such as variational autoencoders (VAEs).

Bayesian Inference • Gaussian Processes

Fantasizing with Dual GPs in Bayesian Optimization and Active Learning

no code implementations • 2 Nov 2022 • Paul E. Chang, Prakhar Verma, ST John, Victor Picheny, Henry Moss, Arno Solin

Gaussian processes (GPs) are the main surrogate functions used for sequential modelling such as Bayesian Optimization and Active Learning.

Active Learning • Bayesian Optimization +1
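The GP-surrogate role described above can be sketched with standard closed-form GP regression. This is a generic textbook sketch, not the dual-GP method of the paper; the RBF kernel and all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Generic sketch (not the paper's dual-GP method): a GP surrogate with an
# RBF kernel, giving the closed-form posterior mean/variance that acquisition
# functions in Bayesian optimisation and active learning consume.
def rbf(a, b, lengthscale=0.5, variance=1.0):
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    L = np.linalg.cholesky(K)                      # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha                            # posterior mean
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss - v.T @ v)                   # posterior marginal variance
    return mean, var

x_obs = np.array([0.0, 0.5, 1.0])
y_obs = np.sin(x_obs)
mu, var = gp_posterior(x_obs, y_obs, np.array([0.5]))
```

At an observed input the posterior mean should reproduce the observation (up to the noise level), while the posterior variance shrinks; acquisition functions trade these two quantities off when choosing the next query point.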

Scalable Inference in SDEs by Direct Matching of the Fokker–Planck–Kolmogorov Equation

1 code implementation • NeurIPS 2021 • Arno Solin, Ella Maija Tamir, Prakhar Verma

Simulation-based techniques such as variants of stochastic Runge-Kutta are the de facto approach for inference with stochastic differential equations (SDEs) in machine learning.

Sparse Gaussian Processes for Stochastic Differential Equations

no code implementations • NeurIPS Workshop DLDE 2021 • Prakhar Verma, Vincent Adam, Arno Solin

We frame the problem of learning stochastic differential equations (SDEs) from noisy observations as an inference problem and aim to maximize the marginal likelihood of the observations in a joint model of the latent paths and the noisy observations.

Gaussian Processes • Variational Inference
