On the well-posedness of Bayesian inverse problems

26 Feb 2019  ·  Jonas Latz

The subject of this article is the introduction of a new concept of well-posedness of Bayesian inverse problems. The conventional concept of (Lipschitz, Hellinger) well-posedness in [Stuart 2010, Acta Numerica 19, pp. 451-559] is difficult to verify in practice and may be inappropriate in some contexts. Our concept simply replaces the Lipschitz continuity of the posterior measure in the Hellinger distance by continuity in an appropriate distance between probability measures. Aside from the Hellinger distance, we investigate well-posedness with respect to weak convergence, the total variation distance, the Wasserstein distance, and also the Kullback--Leibler divergence. We demonstrate that the weakening to continuity is tolerable and that the generalisation to other distances is important. The main results of this article are proofs of well-posedness with respect to some of the aforementioned distances for large classes of Bayesian inverse problems. Here, little or no information about the underlying model is necessary, making these results particularly interesting for practitioners using black-box models. We illustrate our findings with numerical examples motivated by machine learning and image processing.
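To make the notion of well-posedness concrete, the following minimal sketch (not taken from the paper) numerically checks continuity of the posterior with respect to the data, measured in the Hellinger distance, for a simple conjugate Gaussian inverse problem y = x + noise. All function names, priors, and parameter values are illustrative assumptions, not the paper's examples.

```python
# Minimal sketch: posterior continuity in the data (well-posedness),
# measured in the Hellinger distance, for a conjugate Gaussian model.
# Prior x ~ N(0, prior_std^2), likelihood y | x ~ N(x, noise_std^2).
# All names and parameter values are assumptions for illustration.

import numpy as np

def hellinger_gaussian(m1, s1, m2, s2):
    """Hellinger distance between N(m1, s1^2) and N(m2, s2^2) (closed form)."""
    h2 = 1.0 - np.sqrt(2.0 * s1 * s2 / (s1**2 + s2**2)) \
        * np.exp(-(m1 - m2)**2 / (4.0 * (s1**2 + s2**2)))
    return np.sqrt(h2)

def posterior(y, prior_std=1.0, noise_std=0.5):
    """Posterior mean and std for the conjugate Gaussian inverse problem."""
    var = (prior_std**2 * noise_std**2) / (prior_std**2 + noise_std**2)
    mean = prior_std**2 * y / (prior_std**2 + noise_std**2)
    return mean, np.sqrt(var)

# As the perturbation of the data shrinks, the Hellinger distance between
# the corresponding posteriors shrinks as well: continuity in the data.
y0 = 1.0
m0, s0 = posterior(y0)
for delta in [1.0, 0.1, 0.01, 0.001]:
    m1, s1 = posterior(y0 + delta)
    d = hellinger_gaussian(m0, s0, m1, s1)
    print(f"|y - y'| = {delta:7.3f}  ->  Hellinger distance = {d:.6f}")
```

In this toy setting the Hellinger distance between the two posteriors decays with the size of the data perturbation, which is the kind of continuity statement the article's well-posedness concept requires, generalised there to other distances such as total variation, Wasserstein, and Kullback--Leibler.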
