Text Embeddings by Weakly-Supervised Contrastive Pre-training

This paper presents E5, a family of state-of-the-art text embeddings that transfer well to a wide range of tasks. The model is trained contrastively with weak supervision signals from our curated large-scale text pair dataset (called CCPairs). E5 can be readily used as a general-purpose embedding model for any task requiring a single-vector representation of text, such as retrieval, clustering, and classification, achieving strong performance in both zero-shot and fine-tuned settings. We conduct extensive evaluations on 56 datasets from the BEIR and MTEB benchmarks. In the zero-shot setting, E5 is the first model to outperform the strong BM25 baseline on the BEIR retrieval benchmark without using any labeled data. When fine-tuned, E5 obtains the best results on the MTEB benchmark, beating existing embedding models with 40x more parameters.
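
Since the abstract positions E5 as a drop-in single-vector encoder, the sketch below illustrates how such a model is typically applied to retrieval: embed a query and candidate passages, then rank passages by cosine similarity. It assumes the publicly released intfloat/e5-base checkpoint on the Hugging Face Hub; the "query:" / "passage:" prefixes and mean pooling follow that model card's recommended recipe rather than anything stated in this abstract, and the example texts are illustrative only.

```python
# Minimal sketch (assumption: intfloat/e5-base checkpoint is available via transformers).
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("intfloat/e5-base")
model = AutoModel.from_pretrained("intfloat/e5-base")

texts = [
    "query: how do contrastive text embeddings work",
    "passage: Contrastive pre-training pulls paired texts together and pushes apart in-batch negatives.",
    "passage: The weather in Paris is mild in spring.",
]

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**batch)

# Mean-pool token embeddings over non-padding positions to get one vector per text.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
embeddings = F.normalize(embeddings, p=2, dim=1)

# Cosine similarity between the query and each passage (higher = more relevant).
scores = embeddings[0] @ embeddings[1:].T
print(scores.tolist())
```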


Results from the Paper


Task: Only Connect Walls Dataset Task 1 (Grouping)  |  Dataset: OCW

Model       | Metric                            | Metric Value | Global Rank
E5 (BASE)   | Wasserstein Distance (WD)         | 83.8 ± .6    | # 11
E5 (BASE)   | # Correct Groups                  | 89 ± 6       | # 12
E5 (BASE)   | Fowlkes Mallows Score (FMS)       | 33.1 ± .3    | # 11
E5 (BASE)   | Adjusted Rand Index (ARI)         | 16.3 ± .4    | # 11
E5 (BASE)   | Adjusted Mutual Information (AMI) | 19.5 ± .4    | # 11
E5 (BASE)   | # Solved Walls                    | 1 ± 0        | # 9
E5 (LARGE)  | Wasserstein Distance (WD)         | 84.4 ± .7    | # 13
E5 (LARGE)  | # Correct Groups                  | 76 ± 5       | # 14
E5 (LARGE)  | Fowlkes Mallows Score (FMS)       | 32.3 ± .4    | # 12
E5 (LARGE)  | Adjusted Rand Index (ARI)         | 15.4 ± .5    | # 12
E5 (LARGE)  | Adjusted Mutual Information (AMI) | 18.5 ± .6    | # 12
E5 (LARGE)  | # Solved Walls                    | 0 ± 0        | # 10
