Improving Language Understanding by Generative Pre-Training

Preprint 2018 · Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever

Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately...
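The "generative pre-training" named in the title is, at its core, training a language model to predict each token from its left context before fine-tuning on a labeled task. A minimal sketch of that next-token cross-entropy objective is below; the tiny random logits are a stand-in for illustration only — the paper itself uses a Transformer decoder to produce them.

```python
import numpy as np

def lm_loss(logits, targets):
    """Average negative log-likelihood of the next token at each position.

    logits:  (T, V) unnormalized scores over a vocabulary of size V
    targets: (T,)   index of the true next token at each position
    """
    # Log-softmax computed with the max-subtraction trick for stability.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Pick out the log-probability assigned to each true next token.
    return -log_probs[np.arange(len(targets)), targets].mean()

# Stand-in inputs (assumptions, not from the paper): 5 positions,
# vocabulary of 10, random scores in place of a real model's output.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 10))
targets = rng.integers(0, 10, size=5)
print(lm_loss(logits, targets))
```

Pre-training minimizes this loss over unlabeled text; fine-tuning then reuses the learned weights for the discriminative tasks listed in the results below.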

Results from the Paper


Task: Natural Language Inference · Dataset: SNLI · Model: Fine-Tuned LM-Pretrained Transformer
  % Test Accuracy:  89.9  (global rank # 6)
  % Train Accuracy: 96.6  (global rank # 5)
  Parameters:       85m   (global rank # 2)