
Bayesian posterior repartitioning for nested sampling

Priors in Bayesian analyses often encode informative domain knowledge that can make the inference process more efficient. Occasionally, however, priors may be unrepresentative of the parameter values for a given dataset, which can result in inefficient parameter space exploration, or even incorrect inferences, particularly for nested sampling (NS) algorithms. Simply broadening the prior in such cases may be inappropriate or impossible in some applications. Our previous solution to this problem, known as posterior repartitioning (PR), instead redefines the prior and likelihood while keeping their product fixed, so that the posterior inferences and evidence estimates remain unchanged but the efficiency of the NS process is significantly increased. In its most practical form, PR raises the prior to some power beta, which is introduced as an auxiliary variable that must be determined on a case-by-case basis, usually by lowering beta from unity according to some pre-defined 'annealing schedule' until the resulting inferences converge to a consistent solution. Here we present a very simple yet powerful alternative Bayesian approach, in which beta is instead treated as a hyperparameter that is inferred from the data alongside the original parameters of the problem, and then marginalised over to obtain the final inference. We show through numerical examples that this Bayesian PR (BPR) method provides a very robust, self-adapting and computationally efficient 'hands-off' solution to the problem of unrepresentative priors in Bayesian inference using NS. Moreover, unlike the original PR method, even for representative priors BPR incurs a negligible computational overhead relative to standard nested sampling, which suggests that it should be used as the default in all NS analyses.
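To make the repartitioning concrete: power PR replaces the prior pi(theta) with pi(theta)^beta / Z(beta), where Z(beta) is the normalisation of pi^beta, and the likelihood with L(theta) * pi(theta)^(1-beta) * Z(beta), so that the prior-likelihood product, and hence the posterior and evidence, are unchanged. In BPR, beta becomes an extra sampled parameter with its own hyperprior and is marginalised over. Below is a minimal one-dimensional sketch of this scheme; the Gaussian prior, toy Gaussian likelihood, uniform hyperprior on beta, and the choice of the dynesty sampler are all illustrative assumptions, not specifics taken from the paper. For a Gaussian prior, pi^beta / Z(beta) is again Gaussian with its variance inflated by 1/beta, which keeps the prior transform analytic.

```python
# Illustrative sketch of Bayesian posterior repartitioning (BPR) with a
# power prior, using the dynesty nested sampler.  All numerical choices
# here (prior, datum, hyperprior on beta) are assumptions for the demo.
import numpy as np
from scipy.stats import norm
import dynesty

mu0, sigma0 = 0.0, 1.0     # assumed Gaussian prior on theta (possibly unrepresentative)
x_obs, sigma_L = 8.0, 0.5  # toy datum sitting far out in the prior tail

def log_prior(theta):
    # Original (unmodified) log-prior pi(theta).
    return norm.logpdf(theta, mu0, sigma0)

def log_Z_beta(beta):
    # Normalisation of pi(theta)^beta for a 1-D Gaussian prior:
    # Z(beta) = (2*pi*sigma0^2)^((1-beta)/2) * beta^(-1/2).
    return 0.5 * (1.0 - beta) * np.log(2.0 * np.pi * sigma0**2) - 0.5 * np.log(beta)

def prior_transform(u):
    # u[0] -> beta, with an assumed uniform hyperprior on (1e-3, 1];
    # u[1] -> theta, drawn from the repartitioned prior pi^beta / Z(beta),
    # which for a Gaussian prior is N(mu0, sigma0^2 / beta).
    beta = 1e-3 + (1.0 - 1e-3) * u[0]
    theta = mu0 + (sigma0 / np.sqrt(beta)) * norm.ppf(u[1])
    return np.array([beta, theta])

def log_likelihood(params):
    beta, theta = params
    # Repartitioned log-likelihood: log L + (1 - beta) * log pi + log Z(beta),
    # so the prior-likelihood product (and the evidence) is left unchanged.
    return (norm.logpdf(x_obs, theta, sigma_L)
            + (1.0 - beta) * log_prior(theta)
            + log_Z_beta(beta))

sampler = dynesty.NestedSampler(log_likelihood, prior_transform, ndim=2)
sampler.run_nested()
res = sampler.results
print("log-evidence:", res.logz[-1])  # should agree with a standard-prior run
# Marginalising over beta is automatic: the weighted theta samples in
# res.samples[:, 1] (weights from res.logwt) target the original posterior.
```

The key design point, per the abstract, is that nothing here needs hand-tuning: beta self-adapts, shrinking when the prior is unrepresentative (inflating the effective prior so the sampler can reach the likelihood peak) and staying near unity when the prior is already adequate.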
