Language Models are Unsupervised Multitask Learners

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset, matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state-of-the-art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
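To make the zero-shot setup concrete, here is a minimal sketch of conditioning a pretrained GPT-2 on a document plus a question, in the spirit of the CoQA experiment (the paper reports greedy decoding for this setting). It assumes the Hugging Face `transformers` port of GPT-2 rather than the paper's own released code, uses the small "gpt2" checkpoint, and the document, question, and exact prompt format here are illustrative, not the ones used in the paper.

```python
# Illustrative sketch: zero-shot QA by conditioning a pretrained language model
# on a document plus a question (assumes the Hugging Face `transformers` port;
# document, question, and prompt format are hypothetical examples).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

document = (
    "The Apollo 11 mission landed the first humans on the Moon in 1969. "
    "Neil Armstrong was the first person to step onto the lunar surface."
)
question = "Who was the first person to walk on the Moon?"

# Condition the model on the document and question; the "answer" is whatever
# the model continues with after the "A:" cue.
prompt = f"{document}\nQ: {question}\nA:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_new_tokens=20,
        do_sample=False,  # greedy decoding
        pad_token_id=tokenizer.eos_token_id,
    )

# Keep only the newly generated tokens after the prompt.
answer = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
print(answer.strip())
```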


Results from the Paper


 Ranked #1 on Language Modelling on enwik8 (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Document Summarization | CNN / Daily Mail | GPT-2 | ROUGE-1 | 29.34 | #26 |
| Document Summarization | CNN / Daily Mail | GPT-2 | ROUGE-2 | 8.27 | #26 |
| Document Summarization | CNN / Daily Mail | GPT-2 | ROUGE-L | 26.58 | #26 |
| Language Modelling | enwik8 | GPT-2 (48 layers, h=1600) | Bit per Character (BPC) | 0.93 | #1 |
| Language Modelling | enwik8 | GPT-2 (48 layers, h=1600) | Number of params | 1542M | #1 |
| Language Modelling | LAMBADA | GPT-2 1.5B (Zero Shot) | Accuracy | 63.24 | #29 |
| Language Modelling | LAMBADA | GPT-2 1.5B (Zero Shot) | Perplexity | 8.63 | #10 |
| Language Modelling | One Billion Word | GPT-2 | PPL | 42.16 | #21 |
| Language Modelling | One Billion Word | GPT-2 | Number of params | 1.54B | #1 |
| Language Modelling | Penn Treebank (Word Level) | GPT-2 | Test perplexity | 35.76 | #3 |
| Language Modelling | Penn Treebank (Word Level) | GPT-2 | Number of params | 1542M | #2 |
| Dialogue State Tracking | SIMMC2.0 | GPT-2 | Slot F1 | 81.7 | #4 |
| Dialogue State Tracking | SIMMC2.0 | GPT-2 | Act F1 | 94.5 | #4 |
| Response Generation | SIMMC2.0 | GPT-2 | BLEU | 19.2 | #5 |
| Language Modelling | Text8 | GPT-2 | Bit per Character (BPC) | 0.98 | #1 |
| Language Modelling | Text8 | GPT-2 | Number of params | 1542M | #1 |
| Language Modelling | WikiText-103 | GPT-2 Large | Test perplexity | 22.05 | #46 |
| Language Modelling | WikiText-103 | GPT-2 Large | Number of params | 774M | #8 |
| Language Modelling | WikiText-103 | GPT-2 Medium | Test perplexity | 26.37 | #63 |
| Language Modelling | WikiText-103 | GPT-2 Medium | Number of params | 355M | #10 |
| Language Modelling | WikiText-103 | GPT-2 Full | Test perplexity | 17.48 | #25 |
| Language Modelling | WikiText-103 | GPT-2 Full | Number of params | 1542M | #6 |
| Language Modelling | WikiText-103 | GPT-2 Small | Test perplexity | 37.50 | #79 |
| Language Modelling | WikiText-103 | GPT-2 Small | Number of params | 124M | #39 |
| Language Modelling | WikiText-2 | GPT-2 (small) | Test perplexity | 29.41 | #9 |
| Language Modelling | WikiText-2 | GPT-2 (small) | Number of params | 117M | #7 |
| Language Modelling | WikiText-2 | GPT-2 (medium) | Test perplexity | 22.76 | #8 |
| Language Modelling | WikiText-2 | GPT-2 (medium) | Number of params | 345M | #5 |
| Language Modelling | WikiText-2 | GPT-2 (large) | Test perplexity | 19.93 | #7 |
| Language Modelling | WikiText-2 | GPT-2 (large) | Number of params | 762M | #3 |
| Language Modelling | WikiText-2 | GPT-2 | Test perplexity | 18.34 | #6 |
| Language Modelling | WikiText-2 | GPT-2 | Number of params | 1542M | #1 |
| Coreference Resolution | Winograd Schema Challenge | GPT-2-XL 1.5B | Accuracy | 70.7 | #33 |
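As a rough illustration of how zero-shot perplexity and bits-per-character figures like those above are computed, the sketch below scores a piece of text with a pretrained GPT-2. It is a minimal example assuming the Hugging Face `transformers` port and the small "gpt2" checkpoint; it omits the dataset-specific preprocessing and de-tokenization used for the reported benchmarks, so its numbers will not match the table.

```python
# Illustrative sketch: per-token perplexity and bits-per-character of a
# pretrained GPT-2 on a short text (assumes the Hugging Face `transformers`
# port; the text and checkpoint are hypothetical stand-ins for a benchmark).
import math

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The quick brown fox jumps over the lazy dog."
input_ids = tokenizer.encode(text, return_tensors="pt")

with torch.no_grad():
    # With labels == input_ids, the model returns the mean cross-entropy
    # (in nats) over the shifted target tokens.
    loss = model(input_ids, labels=input_ids).loss

perplexity = math.exp(loss.item())                   # per-token perplexity
total_nats = loss.item() * (input_ids.shape[1] - 1)  # loss is averaged over n-1 targets
bpc = total_nats / math.log(2) / len(text)           # bits per character of the raw text
print(f"perplexity = {perplexity:.2f}, bits per character = {bpc:.3f}")
```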

Methods