Efficient unsupervised training and inference in deep generative models
remains a challenging problem. One basic approach, called the Helmholtz
machine, involves training a top-down directed generative model together with a
bottom-up auxiliary model used for approximate inference. Recent results
indicate that better generative models can be obtained with better approximate
inference procedures. Instead of improving the inference procedure, we here
propose a new model which guarantees that the top-down and bottom-up
distributions can efficiently invert each other. We achieve this by
interpreting both the top-down and the bottom-up directed models as approximate
inference distributions and by defining the model distribution to be the
geometric mean of these two.
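To make this concrete (the notation here is ours; the abstract itself fixes
none): writing p(x, h) for the top-down generative model and q(x, h) for the
bottom-up model, both taken as normalized joint distributions over observed
variables x and latents h, the geometric-mean construction can be sketched as

\[
p^*(\mathbf{x}, \mathbf{h}) \;=\; \frac{1}{Z}\,\sqrt{p(\mathbf{x}, \mathbf{h})\, q(\mathbf{x}, \mathbf{h})},
\qquad
Z \;=\; \sum_{\mathbf{x}, \mathbf{h}} \sqrt{p(\mathbf{x}, \mathbf{h})\, q(\mathbf{x}, \mathbf{h})},
\]

where the Cauchy-Schwarz inequality gives Z ≤ 1, with equality exactly when
the two distributions agree. How q assigns marginal mass to x is a modeling
choice the abstract leaves open.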
We present a lower bound on the likelihood of this model and show that
optimizing this bound regularizes the model so that the Bhattacharyya distance
between the bottom-up and top-down approximate distributions is minimized.
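Under the notation introduced above, one way such a bound can arise (a sketch,
not necessarily the authors' exact derivation): since log Z ≤ 0,

\[
\log p^*(\mathbf{x})
\;=\; \log \sum_{\mathbf{h}} \sqrt{p(\mathbf{x}, \mathbf{h})\, q(\mathbf{x}, \mathbf{h})} \;-\; \log Z
\;\ge\; \tfrac{1}{2}\log p(\mathbf{x}) \,+\, \tfrac{1}{2}\log q(\mathbf{x})
\,-\, D_B\!\left(p(\mathbf{h}\,|\,\mathbf{x}),\, q(\mathbf{h}\,|\,\mathbf{x})\right),
\]

where D_B(p, q) = -log Σ_h sqrt(p(h) q(h)) is the Bhattacharyya distance.
Maximizing the bound therefore pushes up both marginal likelihoods while
driving the Bhattacharyya distance between the two posteriors toward zero,
which is the regularization effect described above.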
This approach results in state-of-the-art generative models which prefer
significantly deeper architectures while allowing for orders of magnitude more
efficient approximate inference.
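As a minimal NumPy sketch of how a bound of this form could be estimated in
practice: the callables sample_h (draws h ~ q(h|x)), log_q_cond, log_p_joint,
and log_q_joint are hypothetical placeholders for model log-densities, not an
interface defined by the paper. A single bottom-up pass supplies the
importance proposals, which is the sense in which inference stays cheap.

```python
import numpy as np

def bound_estimate(x, sample_h, log_q_cond, log_p_joint, log_q_joint, K=100):
    """Monte Carlo estimate of log sum_h sqrt(p(x,h) q(x,h)).

    Uses h_k ~ q(h|x) as an importance proposal, since
        sum_h sqrt(p(x,h) q(x,h))
            = E_{h ~ q(h|x)}[ sqrt(p(x,h) q(x,h)) / q(h|x) ].
    All callables are hypothetical placeholders for model log-densities.
    """
    log_w = np.empty(K)
    for k in range(K):
        h = sample_h(x)                                   # h_k ~ q(h|x)
        log_w[k] = (0.5 * (log_p_joint(x, h) + log_q_joint(x, h))
                    - log_q_cond(h, x))                   # log sqrt(pq)/q(h|x)
    m = log_w.max()                                       # log-mean-exp trick
    return m + np.log(np.mean(np.exp(log_w - m)))
```

By Jensen's inequality the log of the sample mean is biased downward, so in
expectation the estimate remains a valid stochastic lower bound.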