Scaling language models with more data, compute, and parameters has driven significant progress in natural language processing. For example, thanks to scaling, GPT-3 was able to achieve strong results on in-context learning tasks. However, training these large dense models requires significant amounts of computing resources. In this paper, we propose and develop a family of language models named GLaM (Generalist Language Model), which uses a sparsely activated mixture-of-experts architecture to scale model capacity while also incurring substantially less training cost than dense variants. The largest GLaM has 1.2 trillion parameters, approximately 7x more than GPT-3. It consumes only 1/3 of the energy used to train GPT-3 and requires half the computation FLOPs for inference, while still achieving better overall zero-shot and one-shot performance across 29 NLP tasks.
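To make the sparsely activated design concrete, below is a minimal NumPy sketch of a top-2 gated mixture-of-experts feed-forward layer. It only illustrates the general mechanism (a router scores every expert per token, the two best experts run, and their outputs are mixed by the gate weights); the `MoELayer` class, the layer sizes, and the omission of load-balancing losses and expert-capacity limits are simplifications assumed here, not GLaM's actual implementation.

```python
import numpy as np

class MoELayer:
    """Minimal top-2 gated mixture-of-experts feed-forward layer (illustrative only)."""

    def __init__(self, d_model=512, d_hidden=2048, num_experts=64, seed=0):
        rng = np.random.default_rng(seed)
        # Router: one score per expert for each token.
        self.w_gate = rng.normal(scale=0.02, size=(d_model, num_experts))
        # Each expert is an independent two-layer feed-forward network.
        self.w_in = rng.normal(scale=0.02, size=(num_experts, d_model, d_hidden))
        self.w_out = rng.normal(scale=0.02, size=(num_experts, d_hidden, d_model))

    def __call__(self, x):
        """x: [num_tokens, d_model] -> [num_tokens, d_model]."""
        logits = x @ self.w_gate                          # [tokens, experts]
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)             # softmax over experts
        top2 = np.argsort(-probs, axis=-1)[:, :2]         # indices of the 2 best experts
        out = np.zeros_like(x)
        for t in range(x.shape[0]):                       # route token by token (clarity over speed)
            for e in top2[t]:
                h = np.maximum(x[t] @ self.w_in[e], 0.0)  # expert FFN with ReLU
                out[t] += probs[t, e] * (h @ self.w_out[e])
        return out

# Only 2 of the 64 experts run per token, so the activated parameter count
# (and hence inference FLOPs) stays far below the total parameter count.
tokens = np.random.default_rng(1).normal(size=(4, 512))
print(MoELayer()(tokens).shape)  # (4, 512)
```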


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Common Sense Reasoning | ARC (Challenge) | GLaM 64B/64E (one-shot) | Accuracy | 48.2 | #33 |
| Common Sense Reasoning | ARC (Challenge) | GLaM 64B/64E (zero-shot) | Accuracy | 50.3 | #30 |
| Common Sense Reasoning | ARC (Easy) | GLaM 64B/64E (5-shot) | Accuracy | 74.8 | #20 |
| Common Sense Reasoning | ARC (Easy) | GLaM 64B/64E (zero-shot) | Accuracy | 68.0 | #36 |
| Language Modelling | LAMBADA | GLaM 64B/64E (one-shot) | Accuracy | 80.9 | #10 |
| Question Answering | Natural Questions | GLaM 64B/64E (few-shot) | EM | 32.5 | #24 |
| Question Answering | TriviaQA | GLaM 64B/64E (few-shot) | EM | 75.8 | #13 |
| Question Answering | TriviaQA | GLaM 64B/64E (zero-shot) | EM | 71.3 | #22 |
| Question Answering | TriviaQA | GLaM 64B/64E (one-shot) | EM | 75.8 | #13 |
| Question Answering | WebQuestions | GLaM 64B/64E (zero-shot) | EM | 15.5 | #16 |
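
The question-answering rows above report EM (exact match), which scores a prediction 1 only if it equals a reference answer after light normalization. The snippet below is a minimal sketch assuming the common SQuAD-style normalization (lowercase, strip punctuation and articles, collapse whitespace); the exact normalization behind the GLaM numbers may differ.

```python
import re
import string

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, references):
    """EM = 1 if the normalized prediction equals any normalized reference."""
    return int(any(normalize(prediction) == normalize(ref) for ref in references))

print(exact_match("The Eiffel Tower", ["Eiffel Tower"]))  # 1
print(exact_match("Paris, France", ["Paris"]))            # 0
```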

Results from Other Papers


| Task | Dataset | Model | Metric Name | Metric Value | Rank |
|------|---------|-------|-------------|--------------|------|
| Question Answering | Natural Questions | GLaM 64B/64E (one-shot) | EM | 26.3 | #31 |
| Question Answering | Natural Questions | GLaM 64B/64E (zero-shot) | EM | 24.7 | #35 |

Methods