Neural Natural Language Inference Models Enhanced with External Knowledge

Modeling natural language inference is a challenging task. With the availability of large annotated data, it has recently become feasible to train complex models such as neural-network-based inference models, which have been shown to achieve state-of-the-art performance. Although relatively large annotated datasets exist, can machines learn all the knowledge needed to perform natural language inference (NLI) from them? If not, how can neural-network-based NLI models benefit from external knowledge, and how can such models be built to leverage it? In this paper, we enrich state-of-the-art neural NLI models with external knowledge and demonstrate that the proposed models achieve state-of-the-art performance on the SNLI and MultiNLI datasets.

ACL 2018
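
To make the idea concrete, below is a minimal sketch (in PyTorch) of knowledge-enriched co-attention: pairwise alignment energies between premise and hypothesis tokens are augmented with a bias computed from external lexical-relation features (e.g., WordNet synonymy/antonymy/hypernymy indicators). This is an illustrative sketch, not the paper's released implementation; the function name `knowledge_enriched_attention`, the relation tensor `r`, and the learnable weighting `lam` are assumptions made for the example.

```python
# Sketch of knowledge-enriched co-attention, assuming precomputed
# relation features r[i, j] for every premise/hypothesis token pair.
import torch
import torch.nn.functional as F

def knowledge_enriched_attention(a, b, r, lam):
    """
    a:   premise token encodings,            shape (len_a, d)
    b:   hypothesis token encodings,         shape (len_b, d)
    r:   external-knowledge relation features, shape (len_a, len_b, k)
    lam: weights over the k relation features, shape (k,)
    Returns soft alignments of b for each a-token and of a for each b-token.
    """
    # Standard co-attention energies from the token encodings ...
    e = a @ b.t()                                 # (len_a, len_b)
    # ... enriched with a knowledge-based bias for related word pairs.
    e = e + F.relu(r @ lam)                       # weighted relation score
    a_to_b = torch.softmax(e, dim=1) @ b          # premise attends over hypothesis
    b_to_a = torch.softmax(e, dim=0).t() @ a      # hypothesis attends over premise
    return a_to_b, b_to_a

# Toy usage with random tensors
a = torch.randn(5, 8)                             # 5 premise tokens, dim 8
b = torch.randn(7, 8)                             # 7 hypothesis tokens
r = torch.randint(0, 2, (5, 7, 4)).float()        # 4 binary relation indicators
lam = torch.ones(4, requires_grad=True)           # learnable relation weights
a_ctx, b_ctx = knowledge_enriched_attention(a, b, r, lam)
print(a_ctx.shape, b_ctx.shape)                   # (5, 8) and (7, 8)
```

The design choice here is that external knowledge only biases the attention energies, so word pairs known to be related (or contradictory) attract alignment even when their learned embeddings are dissimilar.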

Datasets

SNLI, MultiNLI

Results from the Paper


Task                        Dataset  Model         Metric            Value  Global Rank
Natural Language Inference  SNLI     KIM Ensemble  % Test Accuracy   89.1   #20
Natural Language Inference  SNLI     KIM Ensemble  % Train Accuracy  93.6   #20
Natural Language Inference  SNLI     KIM Ensemble  Parameters        43m    #4
Natural Language Inference  SNLI     KIM           % Test Accuracy   88.6   #30
Natural Language Inference  SNLI     KIM           % Train Accuracy  94.1   #17
Natural Language Inference  SNLI     KIM           Parameters        4.3m   #4

Methods


No methods listed for this paper.