Can Foundation Models Wrangle Your Data?

20 May 2022 · Avanika Narayan, Ines Chami, Laurel Orr, Simran Arora, Christopher Ré

Foundation Models (FMs) are models trained on large corpora of data that, at very large scale, can generalize to new tasks without any task-specific finetuning. As these models grow in size, innovations continue to push the boundaries of what they can do on language and image tasks. This paper aims to understand an underexplored area of FMs: classical data tasks like cleaning and integration. As a proof of concept, we cast five data cleaning and integration tasks as prompting tasks and evaluate the performance of FMs on them. We find that large FMs generalize and achieve SoTA performance on data cleaning and integration tasks even though they were not trained for these data tasks. We identify specific research challenges and opportunities that these models present, including challenges with private and domain-specific data, and opportunities to make data management systems more accessible to non-experts. We make our code and experiments publicly available at: https://github.com/HazyResearch/fm_data_tasks.
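
To make "casting a data task as a prompting task" concrete, the sketch below serializes an entity-matching pair in the style of the Amazon-Google benchmark into a natural-language yes/no question. This is a minimal illustration, not the paper's verbatim templates (those live in the repository): the prompt wording, the `serialize` format, and the `complete` callable standing in for a completion-style FM endpoint (e.g., text-davinci-002) are all assumptions.

```python
from typing import Callable

def serialize(entity: dict) -> str:
    """Flatten a structured record into "attr: value" text."""
    return "; ".join(f"{key}: {value}" for key, value in entity.items())

def match_prompt(a: dict, b: dict) -> str:
    """Cast an entity-matching pair as a yes/no natural-language question."""
    return (
        f"Product A is {serialize(a)}. "
        f"Product B is {serialize(b)}. "
        "Are Product A and Product B the same? Yes or No?"
    )

def resolve(a: dict, b: dict, complete: Callable[[str], str]) -> bool:
    """Ask the FM and parse its free-text answer.

    `complete` is a placeholder for any text-completion FM endpoint;
    no specific API is assumed here.
    """
    answer = complete(match_prompt(a, b))
    return answer.strip().lower().startswith("yes")

# Illustrative records in the style of the Amazon-Google benchmark:
a = {"title": "adobe photoshop elements 5.0", "price": "89.99"}
b = {"title": "photoshop elements 5 win", "price": "86.99"}
print(match_prompt(a, b))
```

The key design point is that no task-specific model is trained: the structured records are rendered as text, and matching is reduced to reading off the first token of the FM's completion.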

Results from the Paper


Task               Dataset        Model                        Metric  Value  Global Rank
Entity Resolution  Amazon-Google  text-davinci-002_fewshot-10  F1 (%)  63.50  #10
Entity Resolution  Amazon-Google  text-davinci-002_zeroshot    F1 (%)  54.30  #11
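
The fewshot-10 configuration prepends ten labeled demonstrations to the same question as the zero-shot prompt. Below is a minimal sketch of that construction, reusing `match_prompt` from the example above; the demonstration layout is likewise an assumption, not the repository's exact template.

```python
def few_shot_prompt(demos: list[tuple[dict, dict, bool]],
                    a: dict, b: dict) -> str:
    """Prepend k labeled (pair, answer) demonstrations before the query pair."""
    parts = [
        match_prompt(da, db) + (" Yes" if label else " No")
        for da, db, label in demos
    ]
    parts.append(match_prompt(a, b))  # the FM completes the final answer
    return "\n\n".join(parts)
```

The roughly nine-point F1 gap between the two rows comes entirely from these in-context examples; the underlying model is unchanged.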
