Variational Inference
753 papers with code • 1 benchmark • 5 datasets
Fitting approximate posteriors with variational inference transforms the inference problem into an optimization problem, where the goal is (typically) to maximize the evidence lower bound (ELBO) on the log marginal likelihood of the data.
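As a toy illustration of this idea (not drawn from any of the papers below), consider fitting a Gaussian approximation q = N(mu, sigma²) to a Gaussian target p = N(3, 1), where the ELBO and its gradients are available in closed form:

```python
import numpy as np

# Toy sketch: maximize the closed-form ELBO of q = N(mu, sigma^2) against a
# Gaussian target p = N(3, 1) by gradient ascent.
#   ELBO(mu, sigma) = E_q[log p(x)] + H(q)
#   E_q[log p(x)]   = -0.5*log(2*pi) - ((mu - 3)**2 + sigma**2) / 2
#   H(q)            =  0.5*log(2*pi*e*sigma**2)
mu, log_sigma = 0.0, 1.0          # optimize log(sigma) to keep sigma > 0
lr = 0.1
for _ in range(500):
    sigma = np.exp(log_sigma)
    grad_mu = -(mu - 3.0)                 # d ELBO / d mu
    grad_log_sigma = 1.0 - sigma**2       # d ELBO / d log(sigma)
    mu += lr * grad_mu
    log_sigma += lr * grad_log_sigma

print(mu, np.exp(log_sigma))  # both approach the target's mean 3 and std 1
```

Because q and p come from the same family here, the optimum recovers the target exactly; in realistic models the ELBO is instead estimated by Monte Carlo and q only approximates the posterior.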
Libraries
Use these libraries to find Variational Inference models and implementations.

Latest papers
An Ordering of Divergences for Variational Inference with Factorized Gaussian Approximations
Our analysis covers the KL divergence, the Rényi divergences, and a score-based divergence that compares $\nabla\log p$ and $\nabla\log q$.
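One common instance of a divergence that compares $\nabla\log p$ and $\nabla\log q$ is the Fisher divergence; the sketch below computes it for one-dimensional Gaussians as an illustration (this is an assumed form, not necessarily the exact divergence studied in the paper):

```python
import numpy as np

# Illustrative sketch: the Fisher divergence
#   D(q, p) = E_{x~q}[ (d/dx log p(x) - d/dx log q(x))**2 ]
# is one score-based divergence comparing gradients of log densities.
# For Gaussians, d/dx log N(m, s^2)(x) = -(x - m) / s^2.
def fisher_divergence(m_p, s_p, m_q, s_q, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(m_q, s_q, size=n)        # Monte Carlo samples from q
    score_p = -(x - m_p) / s_p**2
    score_q = -(x - m_q) / s_q**2
    return np.mean((score_p - score_q) ** 2)

print(fisher_divergence(0.0, 1.0, 0.0, 1.0))  # 0: the scores coincide
print(fisher_divergence(0.0, 1.0, 2.0, 1.0))  # 4: scores differ by the mean gap
```

When the two Gaussians share a variance of 1 and differ in mean by 2, the scores differ by a constant 2 everywhere, so the divergence is exactly 4.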
Neural Markov Random Field for Stereo Matching
Stereo matching is a core task for many computer vision and robotics applications.
Sequential Monte Carlo for Inclusive KL Minimization in Amortized Variational Inference
As an alternative, we propose SMC-Wake, a procedure for fitting an amortized variational approximation that uses likelihood-tempered sequential Monte Carlo samplers to estimate the gradient of the inclusive KL divergence.
An Efficient Difference-of-Convex Solver for Privacy Funnel
The proposed DC separation results in a closed-form update equation, which allows straightforward application to both known and unknown distribution settings.
Stable Training of Normalizing Flows for High-dimensional Variational Inference
However, in practice, training deep normalizing flows for approximating high-dimensional posterior distributions is often infeasible due to the high variance of the stochastic gradients.
Batch and match: black-box variational inference with a score-based divergence
We analyze the convergence of BaM when the target distribution is Gaussian, and we prove that in the limit of infinite batch size the variational parameter updates converge exponentially quickly to the target mean and covariance.
BlackJAX: Composable Bayesian inference in JAX
BlackJAX is a library implementing sampling and variational inference algorithms commonly used in Bayesian computation.
Training Bayesian Neural Networks with Sparse Subspace Variational Inference
Bayesian neural networks (BNNs) offer uncertainty quantification but come with the downside of substantially increased training and inference costs.
The VampPrior Mixture Model
Current clustering priors for deep latent variable models (DLVMs) require defining the number of clusters a priori and are susceptible to poor initializations.
Bayesian Deep Learning for Remaining Useful Life Estimation via Stein Variational Gradient Descent
In particular, experiments on simulated run-to-failure turbofan engine degradation data show that Bayesian deep learning models trained via Stein variational gradient descent consistently outperform, in both convergence speed and predictive performance, the same models trained via parametric variational inference as well as their frequentist counterparts trained via backpropagation.
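Stein variational gradient descent moves a set of particles along a kernelized gradient flow so that they come to approximate the target posterior. A minimal one-dimensional sketch with an RBF kernel and a standard normal target follows; the fixed bandwidth and step size are illustrative choices, not the paper's settings:

```python
import numpy as np

# Minimal 1-D sketch of Stein variational gradient descent (SVGD):
# each particle is updated by phi(x_j) =
#   (1/n) sum_i [ k(x_i, x_j) * grad log p(x_i) + d/dx_i k(x_i, x_j) ],
# an attraction term driven by the score plus a kernel repulsion term.
def svgd(particles, grad_log_p, n_iter=1000, step=0.05, h=1.0):
    x = particles.copy()
    n = len(x)
    for _ in range(n_iter):
        diff = x[:, None] - x[None, :]        # pairwise x_i - x_j
        k = np.exp(-diff**2 / (2 * h))        # RBF kernel k(x_i, x_j)
        grad_k = -diff / h * k                # d k(x_i, x_j) / d x_i
        phi = (k * grad_log_p(x)[:, None]).sum(axis=0) / n \
              + grad_k.sum(axis=0) / n
        x += step * phi
    return x

x0 = np.linspace(2.0, 6.0, 50)                # particles start far from the target
x = svgd(x0, grad_log_p=lambda x: -x)         # score of the target N(0, 1)
print(x.mean(), x.std())                      # drift toward 0 and spread toward 1
```

The attraction term pulls particles toward high-density regions of the target, while the repulsion term (the kernel gradient) keeps them from collapsing onto the mode, so the empirical mean and spread of the particles approximate those of the posterior.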