Rethinking Function-Space Variational Inference in Bayesian Neural Networks

Bayesian neural networks (BNNs) define distributions over functions induced by distributions over parameters. In practice, this model specification makes it difficult to define and use meaningful prior distributions over functions that could aid in training. Moreover, previous attempts at defining an explicit function-space variational objective for approximate inference in BNNs require approximations that do not scale to high-dimensional data. We propose a new function-space approach to variational inference in BNNs and derive a tractable variational objective by linearizing the BNN's posterior predictive distribution about its mean parameters, allowing function-space variational inference to be scaled to large and high-dimensional datasets. We evaluate this approach empirically and show that it leads to models with competitive predictive accuracy and significantly improved predictive uncertainty estimates compared to parameter-space variational inference.
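The linearization step described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (the toy network, parameter values, and all names are assumptions, not the paper's code): a first-order Taylor expansion of a network f(x; θ) about the variational mean μ turns a Gaussian over parameters into a Gaussian over function values.

```python
import numpy as np

# Hypothetical sketch (not the paper's implementation): linearize a tiny
# scalar network f(x; theta) around the variational mean mu, so that a
# parameter-space Gaussian q(theta) = N(mu, Sigma) induces a Gaussian
# over function values:
#   f(x; theta) ~= f(x; mu) + J(x) (theta - mu)
#   =>  f(x) ~ N( f(x; mu),  J(x) Sigma J(x)^T )

def f(x, theta):
    """Toy network: f(x; [w, b]) = tanh(w * x + b)."""
    w, b = theta
    return np.tanh(w * x + b)

def jacobian(x, theta):
    """Analytic Jacobian of f with respect to theta = [w, b] at input x."""
    w, b = theta
    s = 1.0 - np.tanh(w * x + b) ** 2   # derivative of tanh at w*x + b
    return np.array([s * x, s])          # [df/dw, df/db]

mu = np.array([0.8, -0.1])               # variational mean (assumed values)
Sigma = np.diag([0.05, 0.02])            # variational covariance (assumed)

x = 1.5
J = jacobian(x, mu)
pred_mean = f(x, mu)                     # linearized predictive mean
pred_var = J @ Sigma @ J                 # J Sigma J^T (scalar output here)

print(pred_mean, pred_var)
```

Because the induced distribution over function values is Gaussian with closed-form mean and covariance, function-space quantities such as a KL divergence to a Gaussian-process-style prior become tractable, which is what makes the variational objective scalable.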
