Information Newton's flow: second-order optimization method in probability space

13 Jan 2020 · Yifei Wang, Wuchen Li

We introduce a framework for Newton's flows in probability space equipped with information metrics, named information Newton's flows. Two information metrics are considered: the Fisher-Rao metric and the Wasserstein-2 metric. A known fact is that the overdamped Langevin dynamics correspond to the Wasserstein gradient flow of the Kullback-Leibler (KL) divergence. Extending this fact to Wasserstein Newton's flows, we derive Newton's Langevin dynamics. We provide examples of Newton's Langevin dynamics in both one-dimensional space and Gaussian families. For the numerical implementation, we design sampling-efficient variational methods in affine models and reproducing kernel Hilbert spaces (RKHS) to approximate Wasserstein Newton's directions. We also establish convergence results for the proposed information Newton's method with approximated directions. Several numerical examples from Bayesian sampling problems demonstrate the effectiveness of the proposed method.
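The paper's Newton's Langevin dynamics are not reproduced here, but the "known fact" the abstract builds on can be made concrete: for a target density π ∝ exp(−f), the overdamped Langevin SDE dX_t = −∇f(X_t) dt + √2 dB_t has a density ρ_t that evolves as the Wasserstein-2 gradient flow of KL(ρ ‖ π). Below is a minimal sketch of the standard Euler–Maruyama discretization (the unadjusted Langevin algorithm); the potential f, step size h, and initialization are illustrative assumptions, not taken from the paper.

```python
# Sketch of overdamped Langevin dynamics, whose particle density follows the
# Wasserstein gradient flow of KL(rho || pi) -- the first-order baseline that
# the paper's Newton's Langevin dynamics accelerate to second order.
import numpy as np

def grad_f(x):
    # Illustrative potential f(x) = x^2 / 2, i.e. target pi = N(0, 1).
    return x

def langevin_sample(n_particles=1000, n_steps=500, h=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize particles far from the target to show convergence.
    x = rng.normal(loc=5.0, scale=1.0, size=n_particles)
    for _ in range(n_steps):
        # Euler-Maruyama step for dX = -grad f(X) dt + sqrt(2) dW.
        x = x - h * grad_f(x) + np.sqrt(2 * h) * rng.normal(size=n_particles)
    return x

samples = langevin_sample()
print(f"mean ~ {samples.mean():.3f}, var ~ {samples.var():.3f}")  # near 0 and 1
```

As the step size h shrinks and the number of steps grows, the empirical distribution of the particles approaches π; the paper's contribution is replacing this gradient-flow update direction with an (approximated) Wasserstein Newton's direction.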
