Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few-shot NLI

EMNLP (insights) 2021 · Yangqiaoyu Zhou, Chenhao Tan

Although neural models have shown strong performance on datasets such as SNLI, they lack the ability to generalize out-of-distribution (OOD). In this work, we formulate a few-shot learning setup and examine the effects of natural language explanations on OOD generalization. We leverage the templates in the HANS dataset and construct a templated natural language explanation for each of them. Although the generated explanations achieve competitive BLEU scores against the ground-truth explanations, they fail to improve prediction performance. We further show that generated explanations often hallucinate information and miss key elements that indicate the label.
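As a rough illustration of the evaluation described in the abstract, the sketch below scores a model-generated explanation against a templated ground-truth explanation with sentence-level BLEU. The template text and the `templated_explanation` helper are hypothetical stand-ins for the paper's HANS-based templates, not the authors' code; it assumes NLTK is installed.

```python
# A minimal sketch (not the authors' code): comparing a generated explanation
# to a templated ground-truth explanation with sentence-level BLEU.
# The template below is a hypothetical stand-in for the paper's HANS-based
# explanation templates.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction


def templated_explanation(premise_subject: str, hypothesis_subject: str) -> str:
    """Hypothetical template in the spirit of HANS's lexical-overlap heuristic."""
    return (f"The premise is about {premise_subject}, while the hypothesis "
            f"is about {hypothesis_subject}, so the premise does not entail "
            f"the hypothesis.")


reference = templated_explanation("the doctor", "the lawyer")
# Example of a flawed generation: it hallucinates the label and drops the
# contrast between premise and hypothesis subjects.
generated = "The premise is about the doctor, so the premise entails the hypothesis."

# Naive whitespace tokenization; smoothing avoids zero scores on short sentences.
score = sentence_bleu(
    [reference.split()],
    generated.split(),
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU: {score:.3f}")
```

A generation like this can still share much of the reference's surface wording, which is why a competitive BLEU score does not guarantee that the explanation captures the elements that determine the label.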

