Variational Gaussian Process Models without Matrix Inverses

Large matrix inversions have often been cited as a major impediment to scaling Gaussian process (GP) models. With GPs serving as building blocks for ever more sophisticated Bayesian deep learning models, removing this impediment is a necessary step for achieving large-scale results. We present a variational approximation for a wide range of GP models that does not require a matrix inverse to be computed at each optimisation step. Instead, our bound directly parameterises a free matrix as an additional variational parameter; at a local maximum of the bound, this matrix equals the required matrix inverse. We prove that our bound gives the same guarantees as earlier variational approximations. We demonstrate some beneficial properties of the bound experimentally, although significant wall-clock speed improvements will require further advances in optimisation and implementation.
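To make the mechanism concrete, here is a minimal numerical sketch of the standard free-matrix device the abstract alludes to, not the paper's actual bound: for any positive-definite matrix T, log|K| <= tr(KT) - log|T| - M, with equality exactly at T = K^{-1}. Lower-bounding the -log|K| terms of an evidence lower bound this way keeps the objective a valid bound, while the inverse is only ever recovered implicitly by optimising T. The names K, T, and log_det_upper_bound below are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random positive-definite "kernel" matrix K of size M x M.
M = 5
A = rng.standard_normal((M, M))
K = A @ A.T + M * np.eye(M)

def log_det_upper_bound(K, T):
    """Upper bound on log|K| that never inverts K.

    For any positive-definite T:  log|K| <= tr(K T) - log|T| - M,
    with equality exactly when T = K^{-1}.  Optimising the bound
    over T therefore recovers the inverse implicitly, without ever
    computing it directly.
    """
    m = K.shape[0]
    _, logdet_T = np.linalg.slogdet(T)
    return np.trace(K @ T) - logdet_T - m

exact = np.linalg.slogdet(K)[1]   # reference value of log|K|
T_opt = np.linalg.inv(K)          # the optimal T (computed here only to verify)
T_arb = np.eye(M)                 # an arbitrary feasible T

print(exact)                              # log|K|
print(log_det_upper_bound(K, T_opt))      # matches `exact` to float precision
print(log_det_upper_bound(K, T_arb))      # strictly larger: the bound is loose
```

In the paper's setting, the free matrix is treated as an additional variational parameter and optimised jointly with the others, so each gradient step involves only matrix multiplications rather than an inverse; this sketch illustrates only the log-determinant piece of such an objective.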
