Cloze-driven Pretraining of Self-attention Networks

19 Mar 2019 · Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli

We present a new approach for pretraining a bi-directional transformer model that provides significant performance gains across a variety of language understanding problems. Our model solves a cloze-style word reconstruction task, where each word is ablated and must be predicted given the rest of the text...
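To make the cloze objective concrete, the sketch below masks each token in turn and trains a bidirectional encoder to reconstruct it from the remaining text. This is a minimal illustration using a standard PyTorch transformer encoder, not the authors' architecture or code; the class name ClozeLM, the toy vocabulary, and all hyperparameters are hypothetical.

import torch
import torch.nn as nn

class ClozeLM(nn.Module):
    """Toy bi-directional encoder trained with a cloze (word-reconstruction) loss."""
    def __init__(self, vocab_size, d_model=64, nhead=4, num_layers=2, mask_id=0):
        super().__init__()
        self.mask_id = mask_id
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.proj = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids
        batch, seq_len = tokens.shape
        total_loss = 0.0
        for pos in range(seq_len):
            # Ablate one position: the model must predict it from the rest of the text.
            corrupted = tokens.clone()
            corrupted[:, pos] = self.mask_id
            hidden = self.encoder(self.embed(corrupted))   # (batch, seq_len, d_model)
            logits = self.proj(hidden[:, pos])              # predict only the ablated word
            total_loss = total_loss + nn.functional.cross_entropy(logits, tokens[:, pos])
        return total_loss / seq_len

# Usage: one training step on a random toy batch.
model = ClozeLM(vocab_size=1000)
batch = torch.randint(1, 1000, (8, 16))   # 8 sequences of 16 token ids
loss = model(batch)
loss.backward()

In practice the per-position loop would be batched for efficiency; it is written out here only to mirror the description of ablating each word and predicting it from the rest of the text.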


Evaluation results from the paper


Task                       Dataset                       Model                   Metric     Value   Global rank
Named Entity Recognition   CoNLL 2003 (English)          CNN Large + fine-tune   F1         93.5    #1
Constituency Parsing       Penn Treebank                 CNN Large + fine-tune   F1 score   95.6    #1
Sentiment Analysis         SST-2 Binary classification   CNN Large               Accuracy   94.6    #6