no code implementations • 26 May 2023 • Felix Jimenez, Matthias Katzfuss
For regression tasks, standard Gaussian processes (GPs) provide natural uncertainty quantification, while deep neural networks (DNNs) excel at representation learning.
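The "natural uncertainty quantification" of a standard GP can be illustrated with a minimal NumPy sketch of exact GP regression (RBF kernel, arbitrary length-scale and noise level chosen here for illustration; this is generic GP math, not the paper's method):

```python
import numpy as np

def rbf(X1, X2, ls=0.3, var=1.0):
    """Squared-exponential (RBF) kernel between two point sets."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return var * np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-2):
    """Exact GP posterior mean and predictive std at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v
    return mean, np.sqrt(np.diag(cov) + noise)

X = np.linspace(0.0, 1.0, 10)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
Xs = np.array([[0.5], [2.0]])          # one in-sample point, one far extrapolation
mean, sd = gp_posterior(X, y, Xs)
# predictive sd grows away from the training data: sd at x=2.0 exceeds sd at x=0.5
```

The point of the snippet is the last line: the GP's predictive standard deviation automatically widens where data is scarce, which is the uncertainty quantification that DNN point predictors lack.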
1 code implementation • 30 Jan 2023 • Jian Cao, Myeongjong Kang, Felix Jimenez, Huiyan Sang, Florian Schäfer, Matthias Katzfuss
To achieve scalable and accurate inference for latent Gaussian processes, we propose a variational approximation based on a family of Gaussian distributions whose covariance matrices have sparse inverse Cholesky (SIC) factors.
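A hypothetical sketch of the SIC idea (not the paper's implementation): parameterize a Gaussian variational family N(m, Sigma) through a sparse upper-triangular factor U of the precision, Sigma^{-1} = U U^T. A simple banded sparsity pattern stands in here for the Vecchia-style pattern; the payoff is that log-determinants and samples come from the sparse factor without ever forming Sigma.

```python
import numpy as np

def banded_sic_factor(n, bandwidth, rng):
    """Upper-triangular factor with nonzeros only within the given bandwidth.
    (Illustrative banded pattern; a Vecchia-based pattern would be used in practice.)"""
    U = np.zeros((n, n))
    for j in range(n):
        i0 = max(0, j - bandwidth)
        U[i0:j, j] = 0.1 * rng.standard_normal(j - i0)
        U[j, j] = 1.0 + 0.1 * abs(rng.standard_normal())  # positive diagonal
    return U

rng = np.random.default_rng(0)
n = 8
U = banded_sic_factor(n, bandwidth=2, rng=rng)
Prec = U @ U.T                   # implied precision matrix Sigma^{-1}
Sigma = np.linalg.inv(Prec)      # dense covariance, formed here only for checking

# The sparse factor gives the log-determinant of the precision for free:
logdet_prec = 2.0 * np.sum(np.log(np.diag(U)))

# Sampling: if z ~ N(0, I), then f = m + U^{-T} z has Cov(f) = U^{-T} U^{-1} = Sigma.
m = np.zeros(n)
z = rng.standard_normal(n)
f = m + np.linalg.solve(U.T, z)  # triangular solve, cheap for sparse U
```

With a sparse U, both operations cost time proportional to the number of nonzeros, which is what makes the variational approximation scalable.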
no code implementations • 2 Mar 2022 • Felix Jimenez, Matthias Katzfuss
We focus on the use of our warped Vecchia GP in trust-region Bayesian optimization via Thompson sampling.
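The trust-region Thompson-sampling loop can be sketched as follows. This is a generic illustration with a plain exact GP standing in for the paper's warped Vecchia GP; `thompson_step`, the candidate count, kernel, and jitter values are all assumptions for the sketch.

```python
import numpy as np

def rbf(X1, X2, ls=0.2):
    """RBF kernel (length-scale chosen arbitrarily for this sketch)."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / ls**2)

def thompson_step(X, y, center, radius, n_cand=128, rng=None):
    """Pick the next query point: draw one GP posterior sample over random
    candidates inside the trust region and take its argmin."""
    rng = rng or np.random.default_rng()
    d = X.shape[1]
    # candidates restricted to the current trust region (a box around `center`)
    C = center + radius * rng.uniform(-1.0, 1.0, size=(n_cand, d))
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    Ks = rbf(X, C)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = rbf(C, C) - v.T @ v + 1e-4 * np.eye(n_cand)  # jitter for stability
    sample = rng.multivariate_normal(mean, cov)        # one joint posterior draw
    return C[np.argmin(sample)]                        # minimize the sampled path

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(20, 2))
y = np.sum((X - 0.3) ** 2, axis=1)                     # toy objective
center, radius = np.array([0.5, 0.5]), 0.2
x_next = thompson_step(X, y, center, radius, rng=rng)  # lies inside the trust region
```

In a full trust-region BO loop, the radius would then shrink or grow depending on whether evaluating `x_next` improves on the incumbent.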