Atlas: Few-shot Learning with Retrieval Augmented Language Models

Large language models have shown impressive few-shot results on a wide range of tasks. However, when knowledge is key to such results, as in question answering and fact checking, models with massive parameter counts appear to be needed to store that knowledge. Retrieval augmented models are known to excel at knowledge intensive tasks without requiring as many parameters, but it is unclear whether they work in few-shot settings. In this work, we present Atlas, a carefully designed and pre-trained retrieval augmented language model able to learn knowledge intensive tasks with very few training examples. We perform evaluations on a wide range of tasks, including MMLU, KILT and Natural Questions, and study the impact of the content of the document index, showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy on Natural Questions using only 64 examples, outperforming a 540B-parameter model by 3% despite having 50x fewer parameters.
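To make the retrieve-then-read pattern behind Atlas concrete, the sketch below shows the general idea under simplified assumptions: a toy bag-of-words retriever and a stub reader stand in for Atlas's dense Contriever retriever and T5 Fusion-in-Decoder reader, which are trained jointly in the actual paper. The names `documents`, `retrieve`, and `answer_with_retrieval` are hypothetical and chosen only for illustration; note that swapping the document list is the analogue of updating the index without retraining.

```python
# Minimal sketch of the retrieve-then-read pattern that Atlas builds on.
# Illustration only: Atlas uses a dense retriever and a Fusion-in-Decoder
# reader trained jointly; here a toy retriever and stub reader keep the
# example self-contained and runnable.
from collections import Counter
import math

# Hypothetical in-memory "index"; Atlas indexes a Wikipedia/Common Crawl dump.
documents = [
    "Paris is the capital and most populous city of France.",
    "The Eiffel Tower was completed in 1889.",
    "Mount Everest is Earth's highest mountain above sea level.",
]

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding'; a real system would use a dense encoder."""
    return Counter(text.lower().split())

def score(query_vec: Counter, doc_vec: Counter) -> float:
    """Cosine similarity between sparse term-count vectors."""
    dot = sum(query_vec[t] * doc_vec[t] for t in query_vec)
    norm = math.sqrt(sum(v * v for v in query_vec.values())) * \
           math.sqrt(sum(v * v for v in doc_vec.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the top-k passages; replacing `documents` with a newer dump
    is the analogue of updating the index without retraining the model."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: score(q, embed(d)), reverse=True)
    return ranked[:k]

def answer_with_retrieval(query: str) -> str:
    """Stub reader: build the prompt a seq2seq reader would consume.
    Atlas instead encodes each (query, passage) pair and fuses them in the decoder."""
    passages = retrieve(query)
    context = "\n".join(f"context: {p}" for p in passages)
    return f"question: {query}\n{context}"

print(answer_with_retrieval("What is the capital of France?"))
```

Because the generator only sees retrieved passages at inference time, the same few-shot-trained model can answer from a December 2018 or December 2021 index simply by pointing it at a different document collection, which is the property the paper's index-update experiments examine.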

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Multi-task Language Understanding | MMLU | Atlas (5-shot) | Average (%) | 47.9 | #68 |
| Question Answering | Natural Questions | Atlas (full, Wiki-dec-2018 index) | EM | 64.0 | #1 |
| Question Answering | Natural Questions | Atlas (few-shot, k=64, Wiki-dec-2021+CC index) | EM | 42.4 | #15 |
| Question Answering | Natural Questions | Atlas (few-shot, k=64, Wiki-dec-2018 index) | EM | 45.1 | #11 |
| Question Answering | Natural Questions | Atlas (full, Wiki-dec-2021+CC index) | EM | 60.4 | #2 |
