HellaSwag: Can a Machine Really Finish Your Sentence?

Recent work by Zellers et al. (2018) introduced a new task of commonsense natural language inference: given an event description such as "A woman sits at a piano," a machine must select the most likely follow-up: "She sets her fingers on the keys." With the introduction of BERT, near human-level performance was reached. Does this mean that machines can perform human-level commonsense inference? In this paper, we show that commonsense inference still proves difficult for even state-of-the-art models, by presenting HellaSwag, a new challenge dataset. Though its questions are trivial for humans (>95% accuracy), state-of-the-art models struggle (<48%). We achieve this via Adversarial Filtering (AF), a data collection paradigm wherein a series of discriminators iteratively select an adversarial set of machine-generated wrong answers. AF proves to be surprisingly robust. The key insight is to scale up the length and complexity of the dataset examples towards a critical 'Goldilocks' zone wherein generated text is ridiculous to humans, yet often misclassified by state-of-the-art models. Our construction of HellaSwag, and its resulting difficulty, sheds light on the inner workings of deep pretrained models. More broadly, it suggests a new path forward for NLP research, in which benchmarks co-evolve with the evolving state-of-the-art in an adversarial way, so as to present ever-harder challenges.
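The Adversarial Filtering loop the abstract describes is simple to sketch. Below is a minimal, self-contained toy in Python, assuming caller-supplied `generate` (a text generator producing candidate wrong endings) and `score` (a discriminator's plausibility score) functions; in the paper the generator is a fine-tuned language model and a fresh discriminator is retrained each round, which this sketch abstracts away behind a fixed `score`:

```python
import random

def adversarial_filtering(examples, generate, score, n_rounds=5, n_negatives=3):
    """Toy Adversarial Filtering (AF) loop.

    examples:  list of (context, real_ending) pairs
    generate:  fn(context) -> one machine-written candidate wrong ending
    score:     fn(context, ending) -> plausibility under the discriminator
    """
    # Seed every context with freshly generated wrong endings.
    negatives = [[generate(ctx) for _ in range(n_negatives)]
                 for ctx, _ in examples]
    for _ in range(n_rounds):
        # AF retrains a discriminator on a random half each round and only
        # rewrites the held-out half; this sketch keeps `score` fixed.
        held_out = random.sample(range(len(examples)), len(examples) // 2)
        for i in held_out:
            ctx, real = examples[i]
            for j, neg in enumerate(negatives[i]):
                # A negative the discriminator already ranks below the real
                # ending is "easy": replace it with a new generation in the
                # hope of finding one that fools the model.
                if score(ctx, neg) < score(ctx, real):
                    negatives[i][j] = generate(ctx)
    return negatives
```

Because only the held-out half is rewritten each round, the discriminator never filters examples it was trained on; the negatives that survive are, by construction, ones the current model family cannot reliably distinguish from the real ending.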

PDF · Abstract (ACL 2019)

Datasets


Introduced in the Paper:

HellaSwag

Used in the Paper:

ActivityNet Captions, SWAG
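For readers who want to inspect the data, HellaSwag is mirrored on the Hugging Face hub, among other places. A minimal loading sketch with the `datasets` library follows; the field names are as published on the hub card and are worth verifying against the version you download:

```python
from datasets import load_dataset

# The validation split carries gold labels; test labels are withheld.
hellaswag = load_dataset("hellaswag", split="validation")

ex = hellaswag[0]
print(ex["ctx"])                             # event description / context
for i, ending in enumerate(ex["endings"]):   # four candidate endings
    print(i, ending)
print("gold:", ex["label"])                  # index of the correct ending, stored as a string
```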

Results from the Paper


Task                 Dataset    Model                   Metric    Value  Global Rank
Sentence Completion  HellaSwag  BERT-Large 340M         Accuracy   47.3  #67
Sentence Completion  HellaSwag  BERT-Base 110M          Accuracy   40.5  #73
Sentence Completion  HellaSwag  GPT-1 117M              Accuracy   41.7  #69
Sentence Completion  HellaSwag  ESIM + ELMo             Accuracy   33.3  #78
Sentence Completion  HellaSwag  LSTM + BERT-Base        Accuracy   36.2  #76
Sentence Completion  HellaSwag  LSTM + ELMo             Accuracy   31.4  #82
Sentence Completion  HellaSwag  LSTM + GloVe            Accuracy   31.7  #80
Sentence Completion  HellaSwag  fastText                Accuracy   31.6  #81
Sentence Completion  HellaSwag  Random chance baseline  Accuracy   25.0  #85
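The Accuracy metric above is four-way multiple choice: the model sees a context and must pick the one human-written ending among three adversarially filtered fakes, hence the 25.0 random-chance baseline. The models in the table are fine-tuned classifiers; purely to illustrate the task format, here is a hedged zero-shot sketch that instead ranks endings by language-model probability (GPT-2 is used only because it is easy to load, and is not one of the models in the table):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def ending_logprob(context: str, ending: str) -> float:
    # Concatenate context and ending; score only the ending tokens.
    ctx_ids = tok.encode(context)
    ids = torch.tensor([ctx_ids + tok.encode(" " + ending)])
    with torch.no_grad():
        logits = model(ids).logits
    # logits at position t predict token t+1, so shift targets by one.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    per_token = logprobs[torch.arange(len(targets)), targets]
    # Sum log-probabilities over the ending tokens only.
    return per_token[len(ctx_ids) - 1:].sum().item()

def predict(context: str, endings: list[str]) -> int:
    # Pick the ending the LM assigns the highest probability.
    return max(range(len(endings)),
               key=lambda i: ending_logprob(context, endings[i]))
```

Dataset accuracy is then the fraction of examples for which `predict(ex["ctx"], ex["endings"])` matches `int(ex["label"])`.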

Methods