Neural Variational Inference and Learning in Belief Networks

31 Jan 2014 · Andriy Mnih, Karol Gregor

Highly expressive directed latent variable models, such as sigmoid belief networks, are difficult to train on large datasets because exact inference in them is intractable and none of the approximate inference methods that have been applied to them scale well. We propose a fast non-iterative approximate inference method that uses a feedforward network to implement efficient exact sampling from the variational posterior...
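
The abstract describes the core recipe: a feedforward inference network produces the parameters of a factorial variational posterior, a latent sample is drawn in a single pass, and the generative model and inference network are trained jointly by maximizing a variational lower bound, with the inference network's score-function gradient stabilized by variance reduction. The sketch below illustrates that setup in PyTorch for a one-layer sigmoid belief network with binary latents; the names (SigmoidBeliefNet, nvil_step), the layer sizes, the single stochastic layer, and the learned input-dependent baseline are illustrative assumptions rather than the paper's exact configuration, which combines several variance reduction techniques.

# Minimal sketch (assumptions): one stochastic layer of binary latents, a linear
# inference network q(h|x), and a learned input-dependent baseline. Sizes and
# hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

D, H = 784, 200  # visible and latent dimensions (assumed)

class SigmoidBeliefNet(nn.Module):
    """Generative model p(h) p(x|h) with factorial Bernoulli distributions."""
    def __init__(self):
        super().__init__()
        self.prior_logits = nn.Parameter(torch.zeros(H))  # p(h)
        self.decoder = nn.Linear(H, D)                    # p(x|h)

    def log_joint(self, x, h):
        log_p_h = -F.binary_cross_entropy_with_logits(
            self.prior_logits.expand_as(h), h, reduction="none").sum(-1)
        log_p_x_given_h = -F.binary_cross_entropy_with_logits(
            self.decoder(h), x, reduction="none").sum(-1)
        return log_p_h + log_p_x_given_h

model = SigmoidBeliefNet()
encoder = nn.Linear(D, H)  # inference network: one feedforward pass gives q(h|x)
baseline = nn.Sequential(nn.Linear(D, 100), nn.Tanh(), nn.Linear(100, 1))
opt = torch.optim.Adam(
    list(model.parameters()) + list(encoder.parameters()) + list(baseline.parameters()),
    lr=1e-3)

def nvil_step(x):
    q = torch.distributions.Bernoulli(logits=encoder(x))
    h = q.sample()                            # exact sample from q(h|x), no iterative inference
    log_q = q.log_prob(h).sum(-1)
    elbo = model.log_joint(x, h) - log_q      # per-example variational lower bound

    # Score-function (REINFORCE) gradient for the inference network; subtracting
    # a learned baseline from the learning signal reduces its variance without
    # changing its expectation.
    b = baseline(x).squeeze(-1)
    signal = (elbo - b).detach()
    loss = -(elbo + signal * log_q).mean() + F.mse_loss(b, elbo.detach())
    return loss

# Example update on a dummy batch of binary vectors.
x = (torch.rand(64, D) > 0.5).float()
opt.zero_grad()
nvil_step(x).backward()
opt.step()

Because the latent variables are discrete, reparameterized gradients are not available for the inference network; the score-function estimator is used instead, and the baseline affects only the variance of that estimator, not its expectation.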

Results from the Paper


TASK | DATASET | MODEL | METRIC NAME | METRIC VALUE | GLOBAL RANK
Latent Variable Models | 200k Short Texts for Humor Detection | meto | 10-20% Mask PSNR | 0.987 | # 1

Methods used in the Paper


METHOD | TYPE
Dense Connections | Feedforward Networks
Feedforward Network | Feedforward Networks