Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering

25 Jul 2017 • Yi Tay • Luu Anh Tuan • Siu Cheung Hui

The dominant neural architectures in question answer retrieval are based on recurrent or convolutional encoders configured with complex word matching layers. Given that recent architectural innovations are mostly new word interaction layers or attention-based matching mechanisms, it seems to be a well-established fact that these components are mandatory for good performance. As such, this paper tackles the question of whether it is possible to achieve competitive performance with simple neural architectures.
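The "simple architecture" the title alludes to scores question-answer pairs by their distance in hyperbolic (Poincaré ball) space rather than through heavy matching layers. As a rough, hypothetical sketch (the toy embeddings below are made up; the actual model learns them end-to-end), the core scoring function is just the standard Poincaré distance:

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Distance between two points inside the unit Poincare ball (norm < 1)."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_dist / (denom + eps))

# Toy question/answer embeddings (illustrative values, not from the paper).
q      = np.array([0.10, 0.20])
a_good = np.array([0.12, 0.22])
a_bad  = np.array([-0.50, -0.40])

# A smaller hyperbolic distance means a better question-answer match.
print(poincare_distance(q, a_good) < poincare_distance(q, a_bad))  # True
```

Ranking candidates by this distance replaces the word-interaction and attention machinery of heavier models, which is what makes the approach fast and parameter-efficient.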

Full paper

Evaluation


Task                 Dataset      Model     Metric   Value   Global rank
Question Answering   SemEvalCQA   HyperQA   P@1      0.809   # 1
Question Answering   SemEvalCQA   HyperQA   MAP      0.795   # 1
Question Answering   TrecQA       HyperQA   MAP      0.770   # 1
Question Answering   TrecQA       HyperQA   MRR      0.825   # 1
Question Answering   WikiQA       HyperQA   MAP      0.712   # 1
Question Answering   WikiQA       HyperQA   MRR      0.727   # 1
Question Answering   YahooCQA     CNN       P@1      0.413   # 5
Question Answering   YahooCQA     CNN       MRR      0.632   # 5
Question Answering   YahooCQA     HyperQA   P@1      0.683   # 1
Question Answering   YahooCQA     HyperQA   MRR      0.801   # 1
Question Answering   YahooCQA     LSTM      P@1      0.465   # 4
Question Answering   YahooCQA     LSTM      MRR      0.669   # 4