e-SNLI: Natural Language Inference with Natural Language Explanations

In order for machine learning to garner widespread public adoption, models must be able to provide interpretable and robust explanations for their decisions, as well as learn from human-provided explanations at train time. In this work, we extend the Stanford Natural Language Inference dataset with an additional layer of human-annotated natural language explanations of the entailment relations. We further implement models that incorporate these explanations into their training process and output them at test time. We show how our corpus of explanations, which we call e-SNLI, can be used for various goals, such as obtaining full sentence justifications of a model's decisions, improving universal sentence representations and transferring to out-of-domain NLI datasets. Our dataset thus opens up a range of research directions for using natural language explanations, both for improving models and for asserting their trust.
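As an illustration of how the released data could be inspected, the sketch below loads e-SNLI through the Hugging Face datasets library and prints one premise/hypothesis pair with its label and a human-written explanation. The dataset identifier "esnli" and the field names (premise, hypothesis, label, explanation_1) refer to a community mirror of the corpus and are assumptions, not part of the original release.

# Minimal sketch: inspecting an e-SNLI example via the Hugging Face `datasets` library.
# Assumption: the corpus is mirrored under the dataset ID "esnli" with the field
# names used below; adjust to however your copy of the data is stored.
from datasets import load_dataset

LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}

esnli = load_dataset("esnli", split="train")
example = esnli[0]

print("Premise:    ", example["premise"])
print("Hypothesis: ", example["hypothesis"])
print("Label:      ", LABELS[example["label"]])
print("Explanation:", example["explanation_1"])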

Published at NeurIPS 2018.

Datasets

Introduced in the paper: e-SNLI
Used in the paper: MultiNLI, SNLI, SICK

Results from the Paper


Task: Natural Language Inference
Dataset: e-SNLI
Model: ExplainThenPredictAttention (e-InferSent Bi-LSTM + Attention)
BLEU: 27.58 (Rank #1 on this benchmark)
Accuracy: 81.71 (Rank #1 on this benchmark)
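For context on the two reported metrics, below is a minimal sketch of how explanation BLEU and label accuracy could be computed for an explain-then-predict style model. The model.predict call, variable names, and the use of sacrebleu's corpus-level BLEU are illustrative assumptions; the paper's exact evaluation setup may differ.

# Minimal sketch: scoring an explain-then-predict model on e-SNLI-style data.
# `model.predict` and the data layout are hypothetical placeholders.
import sacrebleu

def evaluate(model, examples):
    predicted_labels, generated_explanations = [], []
    gold_labels, gold_explanations = [], []
    for ex in examples:
        label, explanation = model.predict(ex["premise"], ex["hypothesis"])  # hypothetical API
        predicted_labels.append(label)
        generated_explanations.append(explanation)
        gold_labels.append(ex["label"])
        gold_explanations.append(ex["explanation_1"])

    # Label accuracy over the evaluation set.
    accuracy = sum(p == g for p, g in zip(predicted_labels, gold_labels)) / len(gold_labels)

    # Corpus-level BLEU of generated explanations against one human reference each.
    bleu = sacrebleu.corpus_bleu(generated_explanations, [gold_explanations]).score

    return {"accuracy": 100.0 * accuracy, "bleu": bleu}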

Methods


No methods listed for this paper.