Specialized Language Models with Cheap Inference from Limited Domain Data

2 Feb 2024  ·  David Grangier, Angelos Katharopoulos, Pierre Ablin, Awni Hannun

Large language models have emerged as a versatile tool but are challenging to apply to tasks lacking large inference budgets and large in-domain training sets. This work formalizes these constraints and distinguishes four important variables: the pretraining budget (for training before the target domain is known), the specialization budget (for training after the target domain is known), the inference budget, and the in-domain training set size. Across these settings, we compare different approaches from the machine learning literature. Under a fixed inference cost, we find better alternatives to the standard practice of training very large vanilla transformer models. In particular, we show that hyper-networks and mixtures of experts have better perplexity for large pretraining budgets, while small models trained on importance sampled datasets are attractive for large specialization budgets.
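To make the importance sampling idea concrete, the sketch below scores generic-corpus documents by a unigram log-likelihood ratio between a model of the in-domain data and a model of the generic data, then samples a specialization training set in proportion to those scores. This is a minimal, Moore-Lewis-style stand-in written for this page, not the paper's actual estimator; the function names (`importance_weights`, `sample_training_set`) and the add-one-smoothed unigram scoring are illustrative assumptions.

```python
import math
import random
from collections import Counter

def unigram_logprob(counts, total, vocab_size, token):
    # Add-one smoothed unigram log-probability.
    return math.log((counts[token] + 1) / (total + vocab_size))

def importance_weights(generic_docs, domain_docs):
    """Weight each generic document by how domain-like it looks.

    Uses the length-normalized log-likelihood ratio between a unigram
    model of the in-domain data and one of the generic data (a
    Moore-Lewis-style criterion; the paper's own estimator may differ).
    """
    gen_counts = Counter(t for d in generic_docs for t in d.split())
    dom_counts = Counter(t for d in domain_docs for t in d.split())
    vocab_size = len(set(gen_counts) | set(dom_counts))
    g_total = sum(gen_counts.values())
    d_total = sum(dom_counts.values())

    weights = []
    for doc in generic_docs:
        tokens = doc.split()
        llr = sum(
            unigram_logprob(dom_counts, d_total, vocab_size, t)
            - unigram_logprob(gen_counts, g_total, vocab_size, t)
            for t in tokens
        ) / max(len(tokens), 1)  # normalize so long docs aren't favored
        weights.append(math.exp(llr))
    return weights

def sample_training_set(generic_docs, weights, k, seed=0):
    # Draw k documents with probability proportional to their weights.
    rng = random.Random(seed)
    return rng.choices(generic_docs, weights=weights, k=k)
```

For example, `sample_training_set(generic_docs, importance_weights(generic_docs, domain_docs), k=10_000)` would draw a 10k-document set biased toward the target domain, on which a small model could then be trained within the specialization budget.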


Datasets

The Pile

Results from the Paper


Ranked #1 on Language Modelling on The Pile (test perplexity).

Task | Dataset | Model | Metric | Value | Global Rank
Language Modelling | The Pile | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | #6
Language Modelling | The Pile | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | #12
Language Modelling | The Pile | Larger Transformer 771M (fine-tuned) | Test perplexity | 10 | #1
Language Modelling | The Pile | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | #10

Methods


Transformer, Hyper-Networks, Mixture of Experts, Importance Sampling