Neural Variational Inference for Text Processing

19 Nov 2015 · Yishu Miao · Lei Yu · Phil Blunsom

In this paper we introduce a generic variational inference framework for generative and conditional models of text. We validate this framework on two very different text modelling applications, generative document modelling and supervised question answering. The neural answer selection model employs a stochastic representation layer within an attention mechanism to extract the semantics between a question and an answer pair.
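At the core of neural variational inference is an inference network that outputs the parameters of a Gaussian variational distribution, from which a latent representation is drawn via the reparameterization trick. The sketch below is a toy illustration of that mechanism, not the paper's model: the linear maps stand in for the inference MLP, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def inference_network(x, W_mu, W_logvar):
    # Linear maps as stand-ins for the paper's inference network:
    # produce the mean and log-variance of q(z | x).
    mu = x @ W_mu
    logvar = x @ W_logvar
    return mu, logvar

def sample_latent(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so gradients can flow through the sampling step.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions;
    # this is the regularization term in the variational lower bound.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

# Toy input features for two "documents" (hypothetical sizes).
x = rng.standard_normal((2, 8))
W_mu = rng.standard_normal((8, 4)) * 0.1
W_logvar = rng.standard_normal((8, 4)) * 0.1

mu, logvar = inference_network(x, W_mu, W_logvar)
z = sample_latent(mu, logvar, rng)       # stochastic representation
kl = kl_to_standard_normal(mu, logvar)   # one per input, always >= 0
```

In the paper's answer selection model, a layer of this kind sits inside the attention mechanism; here it is shown in isolation.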


Evaluation


Task: Question Answering

| Dataset | Model                                 | MAP    | MRR    | Global rank |
|---------|---------------------------------------|--------|--------|-------------|
| QASent  | LSTM                                  | 0.6436 | 0.7235 | #5          |
| QASent  | LSTM (lexical overlap + dist output)  | 0.7228 | 0.7986 | #2          |
| QASent  | Attentive LSTM                        | 0.7339 | 0.8117 | #1          |
| WikiQA  | Attentive LSTM                        | 0.6886 | 0.7069 | #5          |
| WikiQA  | LSTM (lexical overlap + dist output)  | 0.6820 | 0.6988 | #7          |
| WikiQA  | LSTM                                  | 0.6552 | 0.6747 | #9          |
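The MAP and MRR figures above are computed from ranked lists of candidate answers with binary relevance labels. A minimal sketch of the two metrics (plain Python, not the paper's evaluation code; the example label lists are made up):

```python
def average_precision(labels):
    # labels: 0/1 relevance of candidates, already sorted by model score.
    hits, precisions = 0, []
    for rank, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)  # precision at each relevant hit
    return sum(precisions) / max(hits, 1)

def reciprocal_rank(labels):
    # 1 / rank of the first relevant candidate, 0 if none is relevant.
    for rank, rel in enumerate(labels, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def evaluate(ranked_lists):
    # MAP and MRR are means over all questions.
    ap = [average_precision(l) for l in ranked_lists]
    rr = [reciprocal_rank(l) for l in ranked_lists]
    return sum(ap) / len(ap), sum(rr) / len(rr)

# Two hypothetical questions with ranked candidate relevances.
map_score, mrr_score = evaluate([[0, 1, 0, 1], [1, 0, 0]])
```

MAP rewards ranking all correct answers highly, while MRR only looks at the first correct answer, which is why the two values in the table track each other but are not identical.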