Fitting approximate posteriors with variational inference transforms the inference problem into an optimization problem, where the goal is typically to maximize the evidence lower bound (ELBO) on the log likelihood of the data.
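Concretely, for a latent-variable model p(x, z) and an approximate posterior q(z | x), the bound in question is the standard ELBO:

```latex
\log p(x) \;\ge\; \mathbb{E}_{q(z \mid x)}\big[\log p(x, z) - \log q(z \mid x)\big]
          \;=\; \log p(x) - \mathrm{KL}\big(q(z \mid x)\,\|\,p(z \mid x)\big)
```

Maximizing the ELBO over q therefore tightens the bound by shrinking the KL divergence between the approximate and true posteriors.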
Highly expressive directed latent variable models, such as sigmoid belief networks, are difficult to train on large datasets because exact inference in them is intractable and none of the approximate inference methods that have been applied to them scale well.
Motivated by this, we treat the sampler-induced distribution itself as the model of interest and maximize its likelihood.
Hamiltonian Monte Carlo is a powerful algorithm for sampling from difficult-to-normalize posterior distributions.
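As a generic illustration of the algorithm (not tied to any particular paper listed here), a single HMC transition with a leapfrog integrator can be sketched as follows; the step size, number of leapfrog steps, and toy Gaussian target are all assumptions made for the sketch:

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=20, rng=np.random):
    """One Hamiltonian Monte Carlo transition (illustrative sketch).

    x: current position (1-D numpy array)
    log_prob / grad_log_prob: unnormalized log density and its gradient
    """
    p = rng.standard_normal(x.shape)              # resample momentum from N(0, I)
    x_new, p_new = x.copy(), p.copy()

    # Leapfrog integration of the Hamiltonian dynamics
    p_new += 0.5 * step_size * grad_log_prob(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new
        p_new += step_size * grad_log_prob(x_new)
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(x_new)

    # Metropolis accept/reject on the Hamiltonian (negative log joint)
    h_old = -log_prob(x) + 0.5 * np.dot(p, p)
    h_new = -log_prob(x_new) + 0.5 * np.dot(p_new, p_new)
    if rng.random() < np.exp(h_old - h_new):
        return x_new
    return x

# Example: sample from a standard 2-D Gaussian
log_p = lambda x: -0.5 * np.dot(x, x)
grad_log_p = lambda x: -x
x = np.zeros(2)
samples = [x := hmc_step(x, log_p, grad_log_p) for _ in range(1000)]
```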
The posterior over Bayesian neural network (BNN) parameters is extremely high-dimensional and non-convex.
In this paper, we propose the "adversarial autoencoder" (AAE), a probabilistic autoencoder that uses the recently proposed generative adversarial network (GAN) framework to perform variational inference by matching the aggregated posterior of the autoencoder's hidden code vector to an arbitrary prior distribution.
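A minimal sketch of the AAE training signal in PyTorch, assuming toy MLP sizes, MSE reconstruction, and a standard Gaussian prior; none of these specifics come from the paper, which permits an arbitrary prior:

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 8))   # x -> z
dec = nn.Sequential(nn.Linear(8, 256), nn.ReLU(), nn.Linear(256, 784))   # z -> x
disc = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))      # z -> real/fake logit

bce = nn.BCEWithLogitsLoss()
opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

def train_step(x):  # x: (batch, 784)
    # 1) Reconstruction phase: ordinary autoencoder loss
    recon_loss = ((dec(enc(x)) - x) ** 2).mean()
    opt_ae.zero_grad(); recon_loss.backward(); opt_ae.step()

    # 2) Regularization phase: discriminator separates prior samples from codes
    z_fake = enc(x).detach()
    z_real = torch.randn_like(z_fake)          # prior p(z); here N(0, I)
    d_loss = bce(disc(z_real), torch.ones(len(x), 1)) + \
             bce(disc(z_fake), torch.zeros(len(x), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 3) Encoder acts as the generator: its codes should look like prior samples,
    #    matching the aggregated posterior q(z) to p(z)
    g_loss = bce(disc(enc(x)), torch.ones(len(x), 1))
    opt_ae.zero_grad(); g_loss.backward(); opt_ae.step()
```

The two adversarial phases replace the analytic KL term of a standard VAE, which is what lets the prior be arbitrary rather than restricted to distributions with a tractable density.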
We propose a general-purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization.
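This description matches Stein variational gradient descent (SVGD), which iteratively transports a set of particles along a kernelized gradient direction. A minimal sketch of one particle update, assuming an RBF kernel with a fixed bandwidth (the adaptive median-heuristic bandwidth is omitted for brevity):

```python
import numpy as np

def svgd_update(particles, grad_log_p, step_size=0.1, bandwidth=1.0):
    """One SVGD step: particles (n, d) move along the kernelized Stein direction."""
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]    # (n, n, d): x_i - x_j
    sq_dists = (diffs ** 2).sum(-1)                          # (n, n)
    k = np.exp(-sq_dists / (2.0 * bandwidth ** 2))           # RBF kernel matrix

    grads = np.stack([grad_log_p(x) for x in particles])     # (n, d) score at each particle
    # Driving term pulls particles toward high density;
    # repulsive term (kernel gradient) spreads them apart to cover the posterior
    phi = (k @ grads + (k[:, :, None] * diffs / bandwidth ** 2).sum(axis=1)) / n
    return particles + step_size * phi

# Example: approximate a standard 2-D Gaussian with 100 particles
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2)) * 3.0
for _ in range(200):
    x = svgd_update(x, grad_log_p=lambda t: -t)
```

With a single particle the repulsive term vanishes and the update reduces to plain gradient ascent on log p, which is the sense in which the method is a counterpart of gradient descent.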