OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization

Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero- and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of different decisions made during the instruction-tuning process. These decisions include the scale and diversity of the instruction-tuning benchmark, different task sampling strategies, fine-tuning with and without demonstrations, training using specialized datasets for reasoning and dialogue, and finally, the fine-tuning objectives themselves. In this paper, we characterize the effect of instruction-tuning decisions on downstream task performance when scaling both model and benchmark sizes. To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalization: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks. Through the lens of this framework, we first present insights about instruction-tuning decisions as applied to OPT-30B and further exploit these insights to train OPT-IML 30B and 175B, which are instruction-tuned versions of OPT. OPT-IML demonstrates all three generalization abilities at both scales on four different evaluation benchmarks with diverse tasks and input formats -- PromptSource, FLAN, Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly outperform OPT on all benchmarks, but it is also highly competitive with existing models fine-tuned on each specific benchmark. We release OPT-IML at both scales, together with the OPT-IML Bench evaluation framework.
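One of the instruction-tuning decisions the abstract highlights is the task sampling strategy used when mixing thousands of tasks of very different sizes. A common approach in this line of work is examples-proportional mixing with a per-task cap, so that very large datasets cannot dominate the mixture. The sketch below illustrates that idea only; the function names, the cap value, and the toy task sizes are all assumptions for illustration, not the paper's actual configuration.

```python
import random

def mixing_weights(task_sizes, cap=10_000):
    """Examples-proportional mixing with a cap (illustrative sketch).

    Each task contributes in proportion to its number of training
    examples, but no task counts for more than `cap` examples, so
    huge datasets cannot crowd out small ones.
    """
    capped = {task: min(n, cap) for task, n in task_sizes.items()}
    total = sum(capped.values())
    return {task: n / total for task, n in capped.items()}

def sample_task(weights, rng=random):
    # Draw one task name according to the mixing weights.
    tasks, probs = zip(*weights.items())
    return rng.choices(tasks, weights=probs, k=1)[0]

# Toy task sizes (hypothetical, for illustration only).
sizes = {"boolq": 9_000, "rte": 2_500, "squad": 90_000}
w = mixing_weights(sizes, cap=10_000)
# squad is capped at 10_000 of 21_500 total capped examples,
# so its weight is 10_000 / 21_500 rather than 90_000 / 101_500.
```

During training, `sample_task` would be called once per example (or per batch) to decide which task to draw data from; uniform-per-task sampling is the special case where every task's capped size is equal.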

Results from the Paper

Task                        Dataset  Model                     Accuracy  Global Rank
Question Answering          BoolQ    OPT-IML 175B              71.4      #27
Question Answering          BoolQ    OPT-IML 30B               66.9      #30
Question Answering          BoolQ    OPT 30B (zero-shot)       64.0      #34
Question Answering          BoolQ    OPT-IML 1.3B (zero-shot)  61.5      #35
Question Answering          BoolQ    OPT 1.3B (zero-shot)      60.5      #36
Question Answering          BoolQ    OPT 175B                  60.1      #38
Natural Language Inference  RTE      OPT-IML 175B              84.8%     #16
Natural Language Inference  RTE      OPT-IML 30B               83.8%     #18
Natural Language Inference  RTE      OPT-IML 1.3B              66.8%     #42
Natural Language Inference  RTE      OPT 175B                  60.3%     #46
Natural Language Inference  RTE      OPT 30B                   58.1%     #49
Natural Language Inference  RTE      OPT 1.3B                  54.2%     #53