Generating Datasets with Pretrained Language Models

EMNLP 2021 · Timo Schick, Hinrich Schütze

To obtain high-quality sentence embeddings from pretrained language models (PLMs), they must either be augmented with additional pretraining objectives or finetuned on a large set of labeled text pairs. While the latter approach typically outperforms the former, it requires great human effort to generate suitable datasets of sufficient size. In this paper, we show how PLMs can be leveraged to obtain high-quality sentence embeddings without the need for labeled data, finetuning or modifications to the pretraining objective: We utilize the generative abilities of large and high-performing PLMs to generate entire datasets of labeled text pairs from scratch, which we then use for finetuning much smaller and more efficient models. Our fully unsupervised approach outperforms strong baselines on several semantic textual similarity datasets.

PDF · EMNLP 2021 Abstract
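
In essence, the approach prompts a large generative PLM with a natural-language instruction so that it writes labeled sentence pairs from scratch; the resulting synthetic dataset is then used to train a much smaller sentence encoder. The sketch below illustrates the generation step with Hugging Face transformers; the instruction wording, the gpt2-xl checkpoint, and the decoding settings are illustrative assumptions, not the authors' exact DINO setup.

```python
# Sketch of the core idea: prompt a generative PLM with a task description
# so that it writes the second half of a labeled sentence pair from scratch.
# Simplified illustration only; instruction text, checkpoint, and decoding
# parameters are assumptions, not the paper's released implementation.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2-xl")
model = AutoModelForCausalLM.from_pretrained("gpt2-xl")

def generate_pair(sentence: str, similarity: str, max_new_tokens: int = 40) -> str:
    """Continue an instruction that fixes the first sentence and the desired
    similarity level; the continuation becomes the second sentence."""
    prompt = (
        f"Task: Write two sentences that mean {similarity}.\n"
        f'Sentence 1: "{sentence}"\n'
        f'Sentence 2: "'
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,          # sampling yields diverse generated pairs
        top_p=0.9,
        max_new_tokens=max_new_tokens,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated tokens, not the prompt.
    continuation = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    # Keep the text up to the closing quote as the generated second sentence.
    return continuation.split('"')[0].strip()

# Example: generate a highly similar and a dissimilar counterpart.
print(generate_pair("A man is playing a guitar.", "exactly the same thing"))
print(generate_pair("A man is playing a guitar.", "completely different things"))
```
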
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Semantic Textual Similarity | SICK | Dino (STSb/🦕) | Spearman Correlation | 0.6809 | # 18 |
| Semantic Textual Similarity | SICK | Dino (STS/🦕) | Spearman Correlation | 0.7426 | # 8 |
| Semantic Textual Similarity | STS12 | Dino (STSb/🦕) | Spearman Correlation | 0.7027 | # 15 |
| Semantic Textual Similarity | STS13 | Dino (STSb/🦕) | Spearman Correlation | 0.8126 | # 19 |
| Semantic Textual Similarity | STS14 | Dino (STSb/🦕) | Spearman Correlation | 0.7125 | # 19 |
| Semantic Textual Similarity | STS15 | Dino (STSb/🦕) | Spearman Correlation | 0.8049 | # 18 |
| Semantic Textual Similarity | STS16 | Dino (STSb/🦕) | Spearman Correlation | 0.7718 | # 17 |
| Semantic Textual Similarity | STS Benchmark | Dino (STS/🦕) | Spearman Correlation | 0.7651 | # 36 |
| Semantic Textual Similarity | STS Benchmark | Dino (STSb/🦕) | Spearman Correlation | 0.7782 | # 33 |
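
The Spearman correlations in the table above follow the standard STS evaluation protocol: the cosine similarity between the two sentence embeddings of each pair is rank-correlated with gold human similarity ratings. Below is a minimal sketch of that metric computation; the encoder checkpoint and the toy examples are placeholders, not the paper's released Dino models or the benchmark data.

```python
# Minimal sketch of the evaluation behind the table: Spearman correlation
# between cosine similarities of sentence embeddings and gold scores.
# The checkpoint name and the toy data are placeholders.
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder encoder

# Toy STS-style examples: (sentence1, sentence2, gold score in [0, 5]).
examples = [
    ("A man is playing a guitar.", "A person plays an instrument.", 4.2),
    ("A man is playing a guitar.", "A cat sleeps on the sofa.", 0.4),
    ("Two dogs run in the park.", "Dogs are running outside.", 4.6),
]

sents1, sents2, gold = zip(*examples)
emb1 = model.encode(list(sents1))
emb2 = model.encode(list(sents2))

# Cosine similarity of each aligned sentence pair.
pred = cosine_similarity(emb1, emb2).diagonal()

# Spearman's rank correlation between predicted and gold similarities.
rho, _ = spearmanr(pred, gold)
print(f"Spearman correlation: {rho:.4f}")
```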

Methods


No methods listed for this paper.