Cloze-driven Pretraining of Self-attention Networks

We present a new approach for pretraining a bi-directional transformer model that provides significant performance gains across a variety of language understanding problems. Our model solves a cloze-style word reconstruction task, where each word is ablated and must be predicted given the rest of the text. Experiments demonstrate large performance gains on GLUE and new state-of-the-art results on NER and constituency parsing benchmarks, consistent with the concurrently introduced BERT model. We also present a detailed analysis of a number of factors that contribute to effective pretraining, including data domain and size, model capacity, and variations on the cloze objective.
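The cloze objective can be made concrete with a short sketch. The snippet below is a hypothetical PyTorch illustration, not the paper's actual two-tower architecture: it ablates each position in turn with a mask token, encodes the remaining context bi-directionally, and scores the prediction against the original word. `ClozeModel`, `cloze_loss`, and all hyperparameters here are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClozeModel(nn.Module):
    """Toy bi-directional encoder trained with a cloze reconstruction loss.

    Hypothetical sketch: the paper's actual model uses a two-tower
    architecture that predicts every position in a single pass.
    """

    def __init__(self, vocab_size, max_len=128, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.mask_id = vocab_size                      # extra id for the ablated slot
        self.embed = nn.Embedding(vocab_size + 1, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def cloze_loss(self, tokens):
        """tokens: (batch, seq_len) int64. Ablate every position and predict it."""
        batch, seq_len = tokens.shape
        positions = torch.arange(seq_len, device=tokens.device)
        total = 0.0
        for i in range(seq_len):                       # one forward pass per ablated word
            masked = tokens.clone()
            masked[:, i] = self.mask_id                # hide word i from the model
            h = self.encoder(self.embed(masked) + self.pos(positions))
            logits = self.out(h[:, i])                 # predict the hidden word
            total = total + F.cross_entropy(logits, tokens[:, i])
        return total / seq_len

model = ClozeModel(vocab_size=1000)
loss = model.cloze_loss(torch.randint(0, 1000, (2, 16)))
loss.backward()
```

The per-position loop above makes the "each word is ablated" idea explicit but costs one forward pass per token; efficient implementations, including the paper's, amortize this so that all positions are predicted from a single pass over the text.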

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Named Entity Recognition (NER) | CoNLL 2003 (English) | CNN Large + fine-tune | F1 | 93.5 | #15 |
| Constituency Parsing | Penn Treebank | CNN Large + fine-tune | F1 | 95.6 | #10 |
| Sentiment Analysis | SST-2 (binary classification) | CNN Large | Accuracy | 94.6 | #32 |
