no code implementations • 15 Apr 2023 • Aditya Ravuri, Francisco Vargas, Vidhi Lalchand, Neil D. Lawrence
Dimensionality reduction (DR) algorithms compress high-dimensional data into a lower-dimensional representation while preserving important features of the data.
no code implementations • 4 Nov 2022 • Vidhi Lalchand, Wessel P. Bruinsma, David R. Burt, Carl E. Rasmussen
In this work we propose an algorithm for sparse Gaussian process regression which leverages MCMC to sample from the hyperparameter posterior within the variational inducing point framework of Titsias (2009).
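The variational inducing-point framework referenced here rests on the collapsed lower bound of Titsias (2009). A minimal sketch of that bound for an RBF kernel is below; the kernel settings, helper names, and toy data are illustrative, not the paper's implementation.

```python
import numpy as np

def titsias_collapsed_bound(x, y, z, ls=1.0, noise=0.1):
    """Collapsed variational lower bound of Titsias (2009) for sparse GP
    regression with an RBF kernel. x, y: training data; z: inducing inputs."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

    Kmm = k(z, z) + 1e-8 * np.eye(len(z))        # jitter for stability
    Knm = k(x, z)
    Qnn = Knm @ np.linalg.solve(Kmm, Knm.T)      # Nystrom approximation of Knn
    S = Qnn + noise ** 2 * np.eye(len(x))
    L = np.linalg.cholesky(S)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # log N(y | 0, Qnn + noise^2 I)
    log_like = (-0.5 * y @ alpha - np.sum(np.log(np.diag(L)))
                - 0.5 * len(x) * np.log(2 * np.pi))
    # correction term penalising the Nystrom approximation error
    trace_term = np.trace(k(x, x) - Qnn) / (2 * noise ** 2)
    return log_like - trace_term

x = np.linspace(0, 5, 20)
y = np.sin(x)
z = np.linspace(0, 5, 5)                         # 5 inducing points
bound = titsias_collapsed_bound(x, y, z)
```

The bound never exceeds the exact log marginal likelihood; sampling hyperparameters by MCMC, as the paper proposes, replaces point optimisation of quantities such as `ls` and `noise` here.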
1 code implementation • 14 Sep 2022 • Vidhi Lalchand, Aditya Ravuri, Emma Dann, Natsuhiko Kumasaka, Dinithi Sumanaweera, Rik G. H. Lindeboom, Shaista Madad, Sarah A. Teichmann, Neil D. Lawrence
Single-cell RNA-seq datasets are growing in size and complexity, enabling the study of cellular composition changes in various biological/clinical contexts.
1 code implementation • 11 Sep 2022 • Vidhi Lalchand, Kenza Tazi, Talay M. Cheema, Richard E. Turner, Scott Hosking
We account for the spatial variation in precipitation with a non-stationary Gibbs kernel parameterised with an input dependent lengthscale.
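The Gibbs kernel admits a compact closed form. A minimal sketch with a hypothetical input-dependent lengthscale function (the specific `lengthscale` choice below is illustrative, not the paper's parameterisation):

```python
import numpy as np

def lengthscale(x):
    # Hypothetical input-dependent lengthscale: any positive function of
    # the input works; here the lengthscale grows away from the origin.
    return 0.5 + 0.3 * np.abs(x)

def gibbs_kernel(x1, x2, var=1.0):
    """Non-stationary Gibbs kernel between two 1-D input arrays:
    k(x, x') = var * sqrt(2 l(x) l(x') / (l(x)^2 + l(x')^2))
                   * exp(-(x - x')^2 / (l(x)^2 + l(x')^2))
    """
    l1 = lengthscale(x1)[:, None]                 # shape (n1, 1)
    l2 = lengthscale(x2)[None, :]                 # shape (1, n2)
    sq_sum = l1 ** 2 + l2 ** 2
    prefactor = np.sqrt(2.0 * l1 * l2 / sq_sum)
    sq_dist = (x1[:, None] - x2[None, :]) ** 2
    return var * prefactor * np.exp(-sq_dist / sq_sum)

x = np.linspace(-2, 2, 5)
K = gibbs_kernel(x, x)
# On the diagonal l(x) = l(x'), so the prefactor is 1 and K has unit diagonal.
```

With a constant lengthscale the prefactor reduces to 1 and the kernel recovers the stationary squared-exponential form.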
no code implementations • 25 Feb 2022 • Vidhi Lalchand, Aditya Ravuri, Neil D. Lawrence
We show how this framework is compatible with different latent variable formulations and perform experiments to compare a suite of models.
1 code implementation • NeurIPS 2021 • Fergus Simpson, Ian Davies, Vidhi Lalchand, Alessandro Vullo, Nicolas Durrande, Carl Rasmussen
Kernel selection plays a central role in determining the performance of Gaussian Process (GP) models, as the chosen kernel determines both the inductive biases and prior support of functions under the GP prior.
no code implementations • AABI Symposium 2021 • Vidhi Lalchand, Aditya Ravuri, Neil D Lawrence
The Bayesian incarnation of the GPLVM uses a variational framework, where the posterior over all unknown quantities is approximated by a well-behaved variational family, a factorised Gaussian.
no code implementations • AABI Symposium 2021 • Fergus Simpson, Vidhi Lalchand, Carl Edward Rasmussen
Gaussian process (GP) models define a rich distribution over functions, with inductive biases controlled by a kernel function.
1 code implementation • NeurIPS 2021 • Fergus Simpson, Vidhi Lalchand, Carl Edward Rasmussen
Learning occurs through the optimisation of kernel hyperparameters using the marginal likelihood as the objective.
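This standard training procedure can be sketched directly: minimise the negative log marginal likelihood with respect to the kernel hyperparameters. The RBF kernel, noise level, and toy data below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def rbf(x1, x2, ls, var):
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ls ** 2)

def neg_log_marginal_likelihood(log_theta, x, y, noise=0.1):
    ls, var = np.exp(log_theta)                  # log-space keeps both positive
    K = rbf(x, x, ls, var) + noise ** 2 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # -log p(y) = 1/2 y^T K^{-1} y + sum_i log L_ii + n/2 log(2 pi)
    return (0.5 * y @ alpha + np.sum(np.log(np.diag(L)))
            + 0.5 * len(x) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 30)
y = np.sin(x) + 0.1 * rng.standard_normal(30)
res = minimize(neg_log_marginal_likelihood, x0=np.zeros(2), args=(x, y))
ls_hat, var_hat = np.exp(res.x)                  # optimised hyperparameters
```

The marginal likelihood automatically trades data fit (the quadratic term) against model complexity (the log-determinant term), which is why it is the default objective for GP hyperparameter learning.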
no code implementations • 19 Jan 2020 • Vidhi Lalchand
The algorithm proposed in this thesis targets a challenging classification problem in high-energy physics: improving the statistical significance of the Higgs discovery.
no code implementations • 16 Jan 2020 • Vidhi Lalchand
While the decay of the Higgs to a pair of tau leptons was established in 2018 (CMS Collaboration et al., 2017) at 4.9$\sigma$ significance based on the 2016 data-taking period, the 2014 public data set continues to serve as a benchmark for testing the performance of supervised classification schemes.
1 code implementation • AABI Symposium 2019 • Vidhi Lalchand, Carl Edward Rasmussen
An alternative learning procedure is to infer the posterior over hyperparameters in a hierarchical specification of GPs, which we call \textit{Fully Bayesian Gaussian Process Regression} (GPR).
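Rather than optimising hyperparameters, the fully Bayesian treatment samples them from their posterior. A minimal sketch using random-walk Metropolis over a log-lengthscale with a standard-normal prior (the kernel, prior, and step size are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def log_marginal(log_ls, x, y, noise=0.1):
    # GP log marginal likelihood with a unit-variance RBF kernel.
    ls = np.exp(log_ls)
    K = (np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ls ** 2)
         + noise ** 2 * np.eye(len(x)))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.sum(np.log(np.diag(L)))
            - 0.5 * len(x) * np.log(2 * np.pi))

def metropolis_lengthscale(x, y, n_samples=500, step=0.3, seed=0):
    """Random-walk Metropolis over the log-lengthscale, N(0, 1) prior."""
    rng = np.random.default_rng(seed)
    log_ls = 0.0
    lp = log_marginal(log_ls, x, y) - 0.5 * log_ls ** 2
    samples = []
    for _ in range(n_samples):
        prop = log_ls + step * rng.standard_normal()
        lp_prop = log_marginal(prop, x, y) - 0.5 * prop ** 2
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            log_ls, lp = prop, lp_prop
        samples.append(log_ls)
    return np.exp(np.array(samples))               # posterior lengthscale draws
```

Predictions then average over these draws, propagating hyperparameter uncertainty that a point estimate would discard.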
no code implementations • 17 Nov 2018 • Vidhi Lalchand, A. C. Faul
In this paper we present a framework for GP training with sequential selection of training data points using an intuitive selection metric.
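One common selection metric of this kind is the GP predictive variance: at each step, add the candidate point the model is most uncertain about. A minimal sketch under that assumption (the paper's actual metric may differ; kernel and noise settings here are illustrative):

```python
import numpy as np

def rbf(x1, x2, ls=1.0):
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ls ** 2)

def select_next_point(x_train, x_pool, noise=0.1, ls=1.0):
    """Return the index of the pool point with the highest GP predictive
    variance given the current training inputs."""
    K = rbf(x_train, x_train, ls) + noise ** 2 * np.eye(len(x_train))
    K_star = rbf(x_pool, x_train, ls)              # shape (pool, train)
    v = np.linalg.solve(K, K_star.T)               # K^{-1} k* for each candidate
    # predictive variance: k(x*, x*) - k*^T K^{-1} k*  (unit prior variance)
    var = 1.0 - np.sum(K_star * v.T, axis=1)
    return int(np.argmax(var))

x_train = np.array([0.0, 0.1])
x_pool = np.array([0.05, 3.0])
# The far-away candidate at 3.0 has the larger predictive variance.
idx = select_next_point(x_train, x_pool)
```

Repeating this loop (select, label, refit) grows the training set greedily toward the regions the GP knows least about.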