1 code implementation • 6 Nov 2023 • Arina Odnoblyudova, Çağlar Hızlı, ST John, Andrea Cognolato, Anne Juuti, Simo Särkkä, Kirsi Pietiläinen, Pekka Marttinen
By differentiating treatment components, incorporating their dosages, and sharing statistical information across patients via a hierarchical multi-output Gaussian process, our method improves prediction accuracy over existing approaches, and allows us to interpret the different effects of carbohydrates and fat on the overall glucose response.
1 code implementation • 6 Jul 2023 • Kenza Tazi, Jihao Andreas Lin, Ross Viljoen, Alex Gardner, ST John, Hong Ge, Richard E. Turner
Gaussian Processes (GPs) offer an attractive method for regression over small, structured and correlated datasets.
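As background for this line of work (not the specific method of the paper), exact GP regression on a small dataset fits in a few lines of NumPy. This sketch assumes a squared-exponential kernel and Gaussian observation noise; `rbf_kernel` and `gp_posterior` are illustrative names:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, X_star, noise=1e-2):
    """Exact GP posterior mean and variance at test inputs X_star."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    K_s = rbf_kernel(X, X_star)
    K_ss = rbf_kernel(X_star, X_star)
    L = np.linalg.cholesky(K)          # O(n^3) -- fine for small datasets
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)
    return mean, var

X = np.array([0.0, 1.0, 2.0])
y = np.sin(X)
mean, var = gp_posterior(X, y, np.array([1.0]))
# with small noise, the posterior mean at a training input stays close to y
```

The cubic cost of the Cholesky factorisation is exactly why GPs shine on small datasets and why the large-scale settings in the entries below need approximations.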
1 code implementation • 7 Jun 2023 • Rui Li, ST John, Arno Solin
Approximate inference in Gaussian process (GP) models with non-conjugate likelihoods gets entangled with the learning of the model hyperparameters.
no code implementations • 29 Mar 2023 • Organizers Of QueerInAI, :, Anaelia Ovalle, Arjun Subramonian, Ashwin Singh, Claas Voelcker, Danica J. Sutherland, Davide Locatelli, Eva Breznik, Filip Klubička, Hang Yuan, Hetvi J, huan zhang, Jaidev Shriram, Kruno Lehman, Luca Soldaini, Maarten Sap, Marc Peter Deisenroth, Maria Leonor Pacheco, Maria Ryskina, Martin Mundt, Milind Agarwal, Nyx McLean, Pan Xu, A Pranav, Raj Korpan, Ruchira Ray, Sarah Mathew, Sarthak Arora, ST John, Tanvi Anand, Vishakha Agrawal, William Agnew, Yanan Long, Zijie J. Wang, Zeerak Talat, Avijit Ghosh, Nathaniel Dennler, Michael Noseworthy, Sharvani Jha, Emi Baylor, Aditya Joshi, Natalia Y. Bilenko, Andrew McNamara, Raphael Gontijo-Lopes, Alex Markham, Evyn Dǒng, Jackie Kay, Manu Saraswat, Nikhil Vytla, Luke Stark
We present Queer in AI as a case study for community-led participatory design in AI.
no code implementations • 11 Nov 2022 • Rui Li, ST John, Arno Solin
Gaussian process training decomposes into inference of the (approximate) posterior and learning of the hyperparameters.
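For reference, the two pieces being decoupled here are the (approximate) posterior over the latent function and the hyperparameter objective; in the conjugate GP-regression case the latter is the closed-form log marginal likelihood, for kernel matrix $K_\theta$ and noise variance $\sigma^2$:

```latex
\log p(\mathbf{y} \mid \theta)
  = -\tfrac{1}{2}\,\mathbf{y}^\top (K_\theta + \sigma^2 I)^{-1} \mathbf{y}
    - \tfrac{1}{2} \log \lvert K_\theta + \sigma^2 I \rvert
    - \tfrac{n}{2} \log 2\pi
```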
no code implementations • 2 Nov 2022 • Paul E. Chang, Prakhar Verma, ST John, Victor Picheny, Henry Moss, Arno Solin
Gaussian processes (GPs) are the main surrogate models used in sequential-modelling tasks such as Bayesian optimization and active learning.
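To illustrate the surrogate role of a GP in Bayesian optimization (standard background, not this paper's contribution), the expected-improvement acquisition scores a candidate point from the surrogate's predictive mean and standard deviation. Function and parameter names here are illustrative; `xi` is the usual exploration offset:

```python
from math import erf, exp, pi, sqrt

def norm_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2 * pi)

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2)))

def expected_improvement(mu, sigma, f_best, xi=0.01):
    """Expected improvement of a Gaussian prediction N(mu, sigma^2)
    over the incumbent value f_best (maximization convention)."""
    if sigma <= 0.0:
        return max(mu - f_best - xi, 0.0)   # deterministic prediction
    z = (mu - f_best - xi) / sigma
    return (mu - f_best - xi) * norm_cdf(z) + sigma * norm_pdf(z)
```

Because both the mean and the uncertainty enter the score, the acquisition trades off exploiting promising regions against exploring uncertain ones.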
no code implementations • 9 Sep 2022 • Çağlar Hızlı, ST John, Anne Juuti, Tuure Saarinen, Kirsi Pietiläinen, Pekka Marttinen
Our model enables the estimation of a treatment policy from observational sequences of treatments and outcomes, and it can predict the interventional and counterfactual progression of the outcome after an intervention on the treatment policy (in contrast with the causal effect of a single treatment).
no code implementations • 16 Nov 2021 • Alexander Nikitin, ST John, Arno Solin, Samuel Kaski
Gaussian processes (GPs) provide a principled and direct approach for inference and learning on graphs.
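One classical way to obtain a valid GP covariance over graph nodes (illustrative background, not the construction of this paper) is the heat kernel of the graph Laplacian, $K = \exp(-tL)$. A minimal NumPy sketch, computing the matrix exponential through an eigendecomposition of the symmetric Laplacian:

```python
import numpy as np

def graph_heat_kernel(A, t=1.0):
    """Heat-kernel covariance K = exp(-t L) for adjacency matrix A,
    where L = D - A is the combinatorial graph Laplacian."""
    L = np.diag(A.sum(axis=1)) - A
    w, V = np.linalg.eigh(L)             # L is symmetric and PSD
    return (V * np.exp(-t * w)) @ V.T

# path graph on three nodes: 0 - 1 - 2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
K = graph_heat_kernel(A, t=0.5)
# K is symmetric positive definite, hence a valid GP covariance on the nodes
```

Nearby nodes get high covariance, with `t` controlling how far correlation diffuses along edges.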
1 code implementation • 12 Apr 2021 • Vincent Dutordoir, Hugh Salimbeni, Eric Hambro, John McLeod, Felix Leibfried, Artem Artemev, Mark van der Wilk, James Hensman, Marc P. Deisenroth, ST John
GPflux is compatible with and built on top of the Keras deep learning eco-system.
no code implementations • 27 Dec 2020 • Felix Leibfried, Vincent Dutordoir, ST John, Nicolas Durrande
In this context, a convenient choice for approximate inference is variational inference (VI), where the problem of Bayesian inference is cast as an optimization problem -- namely, to maximize a lower bound of the log marginal likelihood.
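The bound referred to is the evidence lower bound (ELBO): for a variational distribution $q(f)$ approximating the posterior over the latent function $f$,

```latex
\log p(\mathbf{y})
  \;\ge\; \mathbb{E}_{q(f)}\!\left[\log p(\mathbf{y} \mid f)\right]
  \;-\; \mathrm{KL}\!\left[\,q(f) \,\Vert\, p(f)\,\right]
```

Maximizing the right-hand side over the parameters of $q$ simultaneously tightens the bound and pulls $q$ toward the true posterior.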
no code implementations • 9 Mar 2020 • Ayman Boustati, Sattar Vakili, James Hensman, ST John
Approximate inference in complex probabilistic models such as deep Gaussian processes requires the optimisation of doubly stochastic objective functions.
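"Doubly stochastic" means the objective is noisy from two sources: minibatch subsampling and Monte Carlo sampling of the variational distribution. As a generic illustration of the second source (not the paper's estimator), the reparameterization trick turns a Gaussian expectation into a differentiable average, here estimating $\partial_\mu\,\mathbb{E}_{z \sim \mathcal{N}(\mu,\sigma^2)}[z^2] = 2\mu$:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_grad_mu(mu, sigma, n_samples=100_000):
    """Monte Carlo estimate of d/dmu E_{z ~ N(mu, sigma^2)}[z^2]
    via the reparameterization z = mu + sigma * eps, eps ~ N(0, 1)."""
    eps = rng.standard_normal(n_samples)
    z = mu + sigma * eps
    # d(z^2)/dmu = 2 z, because dz/dmu = 1 under the reparameterization
    return np.mean(2.0 * z)

g = mc_grad_mu(mu=1.5, sigma=0.7)   # true gradient is 2 * mu = 3.0
```

The estimator is unbiased, so its quality is governed entirely by its variance; controlling that variance is what makes optimising such objectives delicate.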
1 code implementation • 2 Mar 2020 • Mark van der Wilk, Vincent Dutordoir, ST John, Artem Artemev, Vincent Adam, James Hensman
One obstacle to the use of Gaussian processes (GPs) in large-scale problems, and as a component in deep learning systems, is the need for bespoke derivations and implementations for small variations in the model or inference.
no code implementations • AABI Symposium 2019 • Mark van der Wilk, ST John, Artem Artemev, James Hensman
We present a variational approximation for a wide range of GP models that does not require a matrix inverse to be performed at each optimisation step.
no code implementations • 28 Feb 2019 • Andrés F. López-Lopera, ST John, Nicolas Durrande
We introduce a novel finite approximation of GP-modulated Cox processes where positivity constraints can be imposed directly on the GP, with no restrictions on the covariance function.
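As background: a Cox process is a Poisson process whose intensity function $\lambda(t)$ is itself random (here, modulated by a GP, which is why positivity of the sampled intensity matters). Given one sampled intensity, events can be drawn by the standard Lewis–Shedler thinning algorithm; this sketch uses a fixed smooth positive intensity as a stand-in for a positivity-constrained GP draw:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_poisson_thinning(intensity, T, lam_max):
    """Sample an inhomogeneous Poisson process on [0, T] whose rate
    function `intensity` is bounded above by lam_max (thinning)."""
    n = rng.poisson(lam_max * T)                       # candidate count
    candidates = rng.uniform(0.0, T, size=n)
    # accept each candidate with probability intensity(t) / lam_max
    keep = rng.uniform(0.0, lam_max, size=n) < intensity(candidates)
    return np.sort(candidates[keep])

events = sample_poisson_thinning(lambda t: 2.0 + np.sin(t) ** 2,
                                 T=10.0, lam_max=3.0)
```

Thinning is exact as long as `lam_max` really upper-bounds the intensity, which is straightforward once positivity and boundedness of the intensity can be guaranteed.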
no code implementations • 28 Dec 2018 • Vincent Adam, Nicolas Durrande, ST John
Generalized additive models (GAMs) are a widely used class of models, valued by statisticians for providing a flexible yet interpretable way to model data beyond linear regression.
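To make the additive structure concrete, here is a toy fit of $y \approx c + \sum_j f_j(x_j)$ with a small polynomial basis per feature, solved by ordinary least squares. This is a deliberately simple stand-in for proper GAM smoothers (the paper's setting uses GPs, not polynomials); `fit_additive` is an illustrative name:

```python
import numpy as np

def fit_additive(X, y, degree=3):
    """Fit y ~ c + sum_j f_j(x_j), each f_j a degree-`degree` polynomial,
    by ordinary least squares over the stacked per-feature basis."""
    n, d = X.shape
    cols = [np.ones(n)]                       # shared intercept c
    for j in range(d):
        for p in range(1, degree + 1):
            cols.append(X[:, j] ** p)
    Phi = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi, coef

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(200)
Phi, coef = fit_additive(X, y)
resid = y - Phi @ coef
```

Because each feature contributes through its own one-dimensional function, the fitted per-feature effects can be plotted and interpreted separately, which is the interpretability argument the entry refers to.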
no code implementations • NeurIPS 2018 • Mark van der Wilk, Matthias Bauer, ST John, James Hensman
Generalising well in supervised learning tasks relies on correctly extrapolating the training data to a large region of the input space.