However, models with a task-specific head require large amounts of training data, making them susceptible to learning and exploiting dataset-specific superficial cues that do not generalize to other datasets.
We present Semi-Structured Explanations for COPA (COPA-SSE), a new crowdsourced dataset of 9,747 semi-structured, English common sense explanations for Choice of Plausible Alternatives (COPA) questions.
Here, we propose to explicitly learn a model that performs well on both the easy test set, which contains superficial cues, and the hard test set, which does not.
Pretrained language models, such as BERT and RoBERTa, have shown large improvements on the commonsense reasoning benchmark COPA.
Recognizing the implicit link between a claim and a piece of evidence (i.e., a warrant) is the key to improving the performance of evidence detection.