Manifold Optimization Assisted Gaussian Variational Approximation

11 Feb 2019 · Bingxin Zhou, Junbin Gao, Minh-Ngoc Tran, Richard Gerlach

Gaussian variational approximation is a popular methodology for approximating posterior distributions in Bayesian inference, especially in high-dimensional and large-data settings. To control the computational cost while still capturing correlations among the variables, a low-rank plus diagonal structure has been introduced in the previous literature for the Gaussian covariance matrix. For a specific Bayesian learning task, the uniqueness of the solution is usually ensured by imposing stringent constraints on the parameterized covariance matrix, constraints which can be violated during the optimization process. In this paper, we consider two special covariance structures, based on Stiefel manifold and Grassmann manifold constraints, to address the optimization difficulty in such factorization architectures. To speed up the updates with minimal hyperparameter tuning, we design two new schemes of Riemannian stochastic gradient descent and compare them with existing methods for optimization on manifolds. In addition to fixing the identification issue, results from both simulated and empirical experiments demonstrate that the proposed methods achieve competitive accuracy and comparable convergence speed in both high-dimensional and large-scale learning tasks.
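
To make the setup concrete, below is a minimal sketch of one Riemannian stochastic gradient step on the Stiefel manifold for the low-rank factor of a "low rank plus diagonal" Gaussian covariance, Sigma = B Bᵀ + D². This is an illustration assuming a standard tangent-space projection and QR retraction, not the paper's implementation; the dimensions, step size, and helper names are hypothetical.

```python
import numpy as np

def proj_stiefel_tangent(B, G):
    """Project a Euclidean gradient G onto the tangent space of the
    Stiefel manifold St(n, p) at B (where B.T @ B = I_p)."""
    BtG = B.T @ G
    return G - B @ (0.5 * (BtG + BtG.T))

def qr_retraction(X):
    """Map an ambient-space point back onto St(n, p) via the Q factor
    of a thin QR decomposition (column signs fixed for uniqueness)."""
    Q, R = np.linalg.qr(X)
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1.0
    return Q * signs

def riemannian_sgd_step(B, euclid_grad, lr=1e-2):
    """One Riemannian SGD step: project the stochastic Euclidean gradient
    onto the tangent space, move along it, then retract onto the manifold."""
    xi = proj_stiefel_tangent(B, euclid_grad)
    return qr_retraction(B - lr * xi)

# Toy low-rank plus diagonal Gaussian: Sigma = B @ B.T + np.diag(d**2),
# with the factor B kept on St(n, p) throughout optimization.
n, p = 50, 3
rng = np.random.default_rng(0)
B = qr_retraction(rng.standard_normal((n, p)))  # random point on St(n, p)
mu, d = np.zeros(n), np.ones(n)                 # variational mean, diagonal scales

# Reparameterized draw from q: theta = mu + B z + d * eps, with
# z ~ N(0, I_p) and eps ~ N(0, I_n); stochastic ELBO gradients w.r.t. B
# would be fed to riemannian_sgd_step above.
z, eps = rng.standard_normal(p), rng.standard_normal(n)
theta = mu + B @ z + d * eps
```

The retraction keeps BᵀB = I_p exact at every iterate, so the constraint cannot break down mid-optimization in the way the abstract describes for penalty-based parameterizations.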
