A Fully Attention-Based Information Retriever

Recurrent neural networks are now the state of the art in natural language processing because they can build rich contextual representations and process texts of arbitrary length. However, recent developments in attention mechanisms have equipped feedforward networks with similar capabilities, enabling faster computation because more operations can be parallelized. We explore this new type of architecture in the domain of question answering and propose a novel approach that we call Fully Attention-Based Information Retriever (FABIR). We show that FABIR achieves competitive results on the Stanford Question Answering Dataset (SQuAD) while having fewer parameters and being faster at both training and inference than rival methods.
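
The abstract does not detail FABIR's layers, but the core building block of attention-only encoders of this kind is scaled dot-product (self-)attention, which lets a feedforward network build contextual token representations without recurrence, so all positions can be processed in parallel. The sketch below is a minimal NumPy illustration of that general mechanism under assumed shapes and randomly initialized projections; it is not the authors' FABIR implementation, and all names here are illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (seq_q, seq_k) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the key positions
    return weights @ V                               # weighted sum of value vectors

# Toy example: a 5-token passage attending to itself (self-attention).
rng = np.random.default_rng(0)
d_model = 8
tokens = rng.normal(size=(5, d_model))               # stand-in for token embeddings

# Hypothetical learned projections (random here; trained in a real model).
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
contextualized = scaled_dot_product_attention(tokens @ W_q, tokens @ W_k, tokens @ W_v)
print(contextualized.shape)  # (5, 8): each token now mixes in context from all others
```

Because the attention weights for every position are computed from the same matrix products, the whole sequence is handled in a few batched operations rather than one step per token, which is the source of the speed advantage over recurrent models mentioned above.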


Datasets

SQuAD1.1

Results from the Paper


Task                 Dataset        Model   Metric   Value     Global Rank
Question Answering   SQuAD1.1       FABIR   EM       67.744    #172
Question Answering   SQuAD1.1       FABIR   F1       77.605    #172
Question Answering   SQuAD1.1 dev   FABIR   EM       65.1      #46
Question Answering   SQuAD1.1 dev   FABIR   F1       75.6      #47

Methods


No methods listed for this paper.