A Surprisingly Robust Trick for Winograd Schema Challenge

15 May 2019 · Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, Thomas Lukasiewicz

The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning. In this paper, we show that the performance of three language models on WSC273 improves strongly when they are fine-tuned on a similar pronoun disambiguation problem dataset (denoted WSCR). We additionally generate a large unsupervised WSC-like dataset. By fine-tuning the BERT language model both on the introduced dataset and on WSCR, we achieve overall accuracies of 72.5% and 74.7% on WSC273 and WNLI, improving on the previous state-of-the-art solutions by 8.8 and 9.6 percentage points, respectively. Furthermore, our fine-tuned models are consistently more robust on the "complex" subsets of WSC273 introduced by Trichelair et al. (2018).
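The underlying idea is to treat each Winograd schema as a candidate-scoring problem: replace the ambiguous pronoun with each candidate referent, mask it, and ask the (fine-tuned) masked language model which replacement it finds more probable. The sketch below illustrates this scoring step with the Hugging Face transformers library; it is not the authors' released code, and the checkpoint name, the `_` pronoun marker in the template, and the mean-log-probability heuristic for multi-token candidates are illustrative assumptions.

```python
# Minimal sketch of masked-LM candidate scoring for a Winograd schema.
# Assumptions: an off-the-shelf bert-large-uncased checkpoint (the paper
# fine-tunes on WSCR and a generated WSC-like corpus first, which is not
# shown here), and averaging token log-probabilities as a tie-neutral
# heuristic for candidates of different lengths.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForMaskedLM.from_pretrained("bert-large-uncased")
model.eval()

def candidate_log_prob(template: str, candidate: str) -> float:
    """Mean log-probability of the candidate's tokens when they are
    masked in the pronoun slot (marked '_' in the template)."""
    cand_ids = tokenizer.encode(candidate, add_special_tokens=False)
    # Put one [MASK] per candidate token into the pronoun slot.
    masked = template.replace("_", " ".join([tokenizer.mask_token] * len(cand_ids)))
    inputs = tokenizer(masked, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    log_probs = torch.log_softmax(logits, dim=-1)
    # Sum the log-probability of each candidate token at its mask position.
    total = sum(log_probs[pos, tok].item() for pos, tok in zip(mask_pos, cand_ids))
    return total / len(cand_ids)

sentence = "The trophy doesn't fit in the suitcase because _ is too big."
scores = {c: candidate_log_prob(sentence, c) for c in ("the trophy", "the suitcase")}
print(max(scores, key=scores.get), scores)  # the higher-scoring candidate wins
```

On WSC273, the prediction is simply the candidate with the higher score; the paper's contribution is that fine-tuning the masked LM on WSCR-style pronoun disambiguation data before this scoring step is what drives the large accuracy gains reported below.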


Datasets

WSC273 · WNLI · WSCR


Results from the Paper


| Task | Dataset | Model | Accuracy (%) | Global Rank |
|---|---|---|---|---|
| Coreference Resolution | Winograd Schema Challenge | BERT-base 110M (fine-tuned on WSCR) | 62.3 | #51 |
| Coreference Resolution | Winograd Schema Challenge | BERT-large 340M (fine-tuned on WSCR) | 71.4 | #32 |
| Coreference Resolution | Winograd Schema Challenge | BERTwiki 340M (fine-tuned on WSCR) | 72.5 | #30 |
| Coreference Resolution | Winograd Schema Challenge | BERTwiki 340M (fine-tuned on half of WSCR) | 70.3 | #34 |
| Natural Language Inference | WNLI | BERT-base 110M (fine-tuned on WSCR) | 70.5 | #16 |
| Natural Language Inference | WNLI | BERT-large 340M (fine-tuned on WSCR) | 71.9 | #15 |
| Natural Language Inference | WNLI | BERTwiki 340M (fine-tuned on WSCR) | 74.7 | #13 |
