Learning to Explain: Datasets and Models for Identifying Valid Reasoning Chains in Multihop Question-Answering

EMNLP 2020  ·  Harsh Jhamtani, Peter Clark

Despite the rapid progress in multihop question-answering (QA), models still have trouble explaining why an answer is correct, with limited explanation training data available to learn from. To address this, we introduce three explanation datasets in which explanations formed from corpus facts are annotated. Our first dataset, eQASC, contains over 98K explanation annotations for the multihop question-answering dataset QASC, and is the first to annotate multiple candidate explanations for each answer. The second dataset, eQASC-perturbed, is constructed by crowd-sourcing perturbations (while preserving their validity) of a subset of explanations in QASC, to test consistency and generalization of explanation prediction models. The third dataset, eOBQA, is constructed by adding explanation annotations to the OBQA dataset to test generalization of models trained on eQASC. We show that this data can be used to significantly improve explanation quality (+14% absolute F1 over a strong retrieval baseline) using a BERT-based classifier, though performance still falls short of the upper bound, offering a new challenge for future research. We also explore a delexicalized chain representation in which repeated noun phrases are replaced by variables, thus turning them into generalized reasoning chains (for example: "X is a Y" AND "Y has Z" IMPLIES "X has Z"). We find that generalized chains maintain performance while also being more robust to certain perturbations.
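The delexicalization idea from the abstract can be sketched as follows. This is an illustrative toy implementation, not the authors' actual preprocessing: the list of shared phrases and the string-replacement scheme are assumptions made for the example.

```python
# Sketch of the "generalized reasoning chain" idea: noun phrases that recur
# across facts in a chain are replaced by variables (X, Y, Z, ...), turning
# a lexical chain into a generalized pattern. The phrase list is assumed
# to be given; the paper's pipeline would identify repeated phrases itself.

def delexicalize(chain, shared_phrases):
    """Replace each shared noun phrase with a variable name (X, Y, Z, ...)."""
    variables = {}
    out = []
    for fact in chain:
        for phrase in shared_phrases:
            if phrase in fact:
                # Reuse the same variable for every occurrence of a phrase.
                var = variables.setdefault(phrase, chr(ord("X") + len(variables)))
                fact = fact.replace(phrase, var)
        out.append(fact)
    return out

chain = ["a poodle is a dog", "a dog has fur"]
print(delexicalize(chain, ["poodle", "dog", "fur"]))
# -> ['a X is a Y', 'a Y has Z']
```

A classifier trained on such chains sees the abstract pattern ("X is a Y" AND "Y has Z") rather than the specific entities, which is what the abstract credits for the added robustness to perturbations.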


Datasets


Introduced in the Paper:

eQASC

Used in the Paper:

HotpotQA, OpenBookQA, QASC, QA2D

Results from the Paper


Task                          Dataset  Model       Metric Name  Metric Value  Global Rank
Reasoning Chain Explanations  eQASC    Bert-chain  AUC-ROC      88            # 1
Reasoning Chain Explanations  eQASC    Bert-chain  Precision@1  57            # 1
Reasoning Chain Explanations  eQASC    Retrieval   AUC-ROC      75            # 3
Reasoning Chain Explanations  eQASC    Retrieval   Precision@1  47            # 3
Reasoning Chain Explanations  eQASC    Bert-grc    AUC-ROC      85            # 2
Reasoning Chain Explanations  eQASC    Bert-grc    Precision@1  55            # 2
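The Precision@1 metric in the table can be illustrated with a small sketch. The scores and validity labels below are made-up toy data, not results from the paper: each question has a list of candidate reasoning chains scored by a model, and Precision@1 asks how often the top-scored chain is a valid explanation.

```python
# Toy illustration of Precision@1 over ranked candidate explanation chains.
# Each question maps to a list of (model_score, is_valid_explanation) pairs;
# the metric is the fraction of questions whose highest-scored chain is valid.

def precision_at_1(questions):
    """Fraction of questions whose top-scored candidate chain is valid."""
    hits = sum(max(cands, key=lambda c: c[0])[1] for cands in questions)
    return hits / len(questions)

questions = [
    [(0.9, 1), (0.4, 0)],   # top-scored chain is valid   -> hit
    [(0.8, 0), (0.3, 1)],   # top-scored chain is invalid -> miss
]
print(precision_at_1(questions))
# -> 0.5
```

AUC-ROC, by contrast, scores the full ranking of valid versus invalid chains across all candidates, which is why the two metrics in the table can diverge for the same model.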

Methods


No methods listed for this paper.