Bayesian Learning with Wasserstein Barycenters

28 May 2018  ·  Julio Backhoff-Veraguas, Joaquin Fontbona, Gonzalo Rios, Felipe Tobar ·

Based on recent developments in optimal transport theory, we propose a novel model-selection strategy for Bayesian learning. More precisely, the goal of this paper is to introduce the Wasserstein barycenter of the posterior law on models as a Bayesian predictive posterior, an alternative to classical choices such as the maximum a posteriori and the model-average Bayesian estimators. After formulating the general problem of Bayesian model selection in a common, parameter-free framework, we exhibit conditions granting the existence and statistical consistency of this estimator, discuss some of its general and specific properties, and provide insight into its theoretical advantages. Furthermore, we illustrate how it can be computed using the theoretical stochastic gradient descent (SGD) algorithm in Wasserstein space introduced in a companion paper (arXiv:2201.04232v2), and provide a numerical example for experimental validation of the proposed method.
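To make the estimator concrete, here is a minimal sketch of a Wasserstein barycenter of posterior models in one dimension. This is not the paper's SGD-in-Wasserstein-space algorithm; instead it uses the well-known 1-D characterization that the 2-Wasserstein barycenter averages quantile functions, which for empirical samples amounts to averaging order statistics. All names and the Gaussian posterior draws are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws over model parameters: each candidate model
# is a 1-D Gaussian N(mu_i, sigma_i). In the paper's setting these would be
# draws from the posterior law on models.
n_models = 20
post_means = rng.normal(0.0, 0.5, size=n_models)
post_stds = rng.uniform(0.8, 1.2, size=n_models)

# Sample each candidate model and sort, giving empirical quantile functions.
n = 10_000
samples = np.sort(
    rng.normal(post_means[:, None], post_stds[:, None], size=(n_models, n)),
    axis=1,
)

# In 1-D, the W2 barycenter averages quantile functions, so averaging the
# order statistics across models yields samples of the barycenter.
barycenter = samples.mean(axis=0)

# Sanity check: the W2 barycenter of Gaussians N(mu_i, sigma_i) in 1-D is
# Gaussian with mean = average of mu_i and std = average of sigma_i.
print(barycenter.mean(), post_means.mean())
print(barycenter.std(), post_stds.mean())
```

In higher dimensions no such closed form exists in general, which is where iterative schemes such as the SGD algorithm referenced in the abstract become necessary.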

