On the Inconsistency of Bayesian Inference for Misspecified Neural Networks

Grünwald and van Ommen (2017) show that Bayesian inference for linear regression can be inconsistent under model misspecification. In this paper, we extend their analysis to Bayesian neural networks (BNNs), investigating whether they too can be inconsistent under misspecification. We find that BNNs exhibit the same inconsistency when Hamiltonian Monte Carlo is used for posterior inference. However, variational inference changes this behavior: surprisingly, we find that variational Bayes leads to BNNs that are consistent in the setting studied by Grünwald and van Ommen (2017). We conjecture that the success of variational Bayes is due to its optimization objective: the evidence lower bound (ELBO) implicitly encourages the posterior approximation to concentrate, mitigating the ill effects of the misspecification.
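
As a reading aid for the conjecture, the following is the standard two-form decomposition of the ELBO (notation ours, not taken from the paper), where q(θ) is the variational approximation, p(θ) the prior, and D the observed data:

```latex
\begin{align}
\mathrm{ELBO}(q)
  &= \mathbb{E}_{q(\theta)}\!\left[\log p(D \mid \theta)\right]
     - \mathrm{KL}\!\left(q(\theta)\,\|\,p(\theta)\right) \\
  &= \log p(D) - \mathrm{KL}\!\left(q(\theta)\,\|\,p(\theta \mid D)\right).
\end{align}
```

The second form shows that maximizing the ELBO minimizes the KL divergence from q to the exact posterior. One plausible reading of the conjecture is via the first form: the expected log-likelihood term heavily penalizes any q that spreads mass over poorly fitting weights, so the optimizer can favor a concentrated q even when the exact (misspecified) posterior does not concentrate.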
