Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning

6 Nov 2016 · Dilin Wang, Qiang Liu

We propose a simple algorithm to train stochastic neural networks to draw samples from given target distributions for probabilistic inference. Our method is based on iteratively adjusting the neural network parameters so that the output changes along a Stein variational gradient direction that maximally decreases the KL divergence with the target distribution. Our method works for any target distribution specified by its unnormalized density function, and can train any black-box architecture that is differentiable with respect to the parameters we want to adapt. As an application of our method, we propose an amortized MLE algorithm for training deep energy models, where a neural sampler is adaptively trained to approximate the likelihood function. Our method mimics an adversarial game between the deep energy model and the neural sampler, and obtains realistic-looking images competitive with the state-of-the-art results.
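
The core sampler update can be sketched as follows. This is a minimal illustration in PyTorch, assuming an RBF kernel with the median-heuristic bandwidth, a toy Gaussian target, and a small MLP sampler; these choices, the hyperparameters, and the helper names are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def rbf_kernel_and_grad(x):
    # RBF kernel with median-heuristic bandwidth, plus the repulsive term
    # sum_j grad_{x_j} k(x_j, x_i), computed analytically.
    n = x.shape[0]
    sq_dist = torch.cdist(x, x) ** 2                       # (n, n) squared pairwise distances
    h = sq_dist.median() / torch.log(torch.tensor(n + 1.0))
    k = torch.exp(-sq_dist / (h + 1e-8))                   # k(x_i, x_j) = exp(-||x_i - x_j||^2 / h)
    # sum_j grad_{x_j} k(x_j, x_i) = (2/h) * (x_i * sum_j k_ij - sum_j k_ij * x_j)
    grad_k = (x * k.sum(dim=1, keepdim=True) - k @ x) * (2.0 / (h + 1e-8))
    return k, grad_k

def svgd_direction(x, log_p):
    # Stein variational gradient phi(x_i): the kernelized direction that most
    # steeply decreases the KL divergence to the target p within the kernel's RKHS.
    x = x.detach().requires_grad_(True)
    grad_logp = torch.autograd.grad(log_p(x).sum(), x)[0]  # grad_x log p(x), per sample
    k, grad_k = rbf_kernel_and_grad(x.detach())
    return ((k @ grad_logp + grad_k) / x.shape[0]).detach()

def amortized_svgd_step(sampler, optimizer, log_p, noise_dim=32, n_samples=64):
    # One sampler update: push the network outputs along the SVGD direction by
    # ascending <f_eta(xi), phi> with phi held fixed (chain rule through the sampler).
    xi = torch.randn(n_samples, noise_dim)   # random noise fed to the sampler
    x = sampler(xi)                          # samples produced by the network
    phi = svgd_direction(x, log_p)
    loss = -(x * phi).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy usage: train a small MLP sampler to draw from a 2-D standard Gaussian.
sampler = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
optimizer = torch.optim.Adam(sampler.parameters(), lr=1e-3)
log_p = lambda x: -0.5 * (x ** 2).sum(dim=1)               # unnormalized log-density of the target
for _ in range(1000):
    amortized_svgd_step(sampler, optimizer, log_p)
```

In the amortized MLE application described in the abstract, a step of this form (with the energy model defining the target log-density) alternates with a gradient update of the energy model's own parameters, giving the adversarial game between the deep energy model and the neural sampler.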


Results from the Paper


Ranked #19 on Conditional Image Generation on CIFAR-10 (Inception score metric)

Task: Conditional Image Generation
Dataset: CIFAR-10
Model: SteinGAN
Metric: Inception score
Metric Value: 6.35
Global Rank: #19

Methods


No methods listed for this paper.