The Rényi Gaussian Process: Towards Improved Generalization

15 Oct 2019 · Xubo Yue, Raed Kontar

We introduce an alternative closed-form lower bound on the Gaussian process ($\mathcal{GP}$) likelihood based on the Rényi $\alpha$-divergence. This new lower bound can be viewed as a convex combination of the Nyström approximation and the exact $\mathcal{GP}$. The key advantage of this bound is its ability to control and tune the regularization enforced on the model; it thus generalizes traditional variational $\mathcal{GP}$ regression. From a theoretical perspective, we provide the convergence rate and risk bound for inference with the proposed approach. Experiments on real data show that the proposed algorithm may deliver improvements over several $\mathcal{GP}$ inference methods.
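To make the "convex combination of the Nyström approximation and the exact $\mathcal{GP}$" concrete, the sketch below evaluates a GP log marginal likelihood whose covariance blends the exact kernel matrix $K$ with its Nyström approximation $Q = K_{nm} K_{mm}^{-1} K_{mn}$ built from inducing inputs. This is an illustrative reading of the abstract only, not the paper's actual bound: the function `blended_log_marginal_likelihood`, the mixing weight `lam`, the inducing inputs `Z`, and the RBF hyperparameters are all assumptions made for illustration, and the paper's mapping from the Rényi order $\alpha$ to the mixing weight is not reproduced here.

```python
# Illustrative sketch (NOT the paper's exact bound): a GP log marginal likelihood
# whose covariance is a convex combination of the exact kernel matrix K and its
# Nystrom approximation Q = K_nm K_mm^{-1} K_mn. The weight `lam` and its relation
# to the Renyi alpha-divergence order are assumptions for illustration only.
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between two sets of inputs."""
    sqdist = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def blended_log_marginal_likelihood(X, y, Z, lam=0.5, noise=0.1):
    """log N(y | 0, (1 - lam) * Q + lam * K + noise^2 * I).

    lam = 1 recovers the exact GP marginal likelihood; lam = 0 uses only the
    Nystrom approximation built from the inducing inputs Z.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X)                         # exact n x n kernel matrix
    Knm = rbf_kernel(X, Z)                       # cross-covariance with inducing inputs
    Kmm = rbf_kernel(Z, Z) + 1e-8 * np.eye(Z.shape[0])
    Q = Knm @ np.linalg.solve(Kmm, Knm.T)        # Nystrom approximation of K
    Sigma = (1 - lam) * Q + lam * K + noise**2 * np.eye(n)
    L = np.linalg.cholesky(Sigma)
    Sigma_inv_y = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ Sigma_inv_y
            - np.sum(np.log(np.diag(L)))         # -0.5 * log|Sigma| via Cholesky
            - 0.5 * n * np.log(2 * np.pi))

# Toy usage: 50 noisy observations of a sine function, 10 inducing inputs.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
Z = np.linspace(-3, 3, 10)[:, None]
for lam in (0.0, 0.5, 1.0):
    ll = blended_log_marginal_likelihood(X, y, Z, lam)
    print(f"lam={lam:.1f}  log-likelihood={ll:.2f}")
```

At `lam=0` the objective behaves like a Nyström/variational-style approximation, while `lam=1` recovers the exact $\mathcal{GP}$ marginal likelihood, which mirrors how the paper's bound interpolates between a regularized approximation and the exact model.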
