Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations

In NLP, many tasks involve pairwise comparison between two sequences (e.g. sentence similarity and paraphrase identification). Predominantly, two formulations are used for sentence-pair tasks: bi-encoders and cross-encoders. Bi-encoders produce fixed-dimensional sentence representations and are computationally efficient, but they usually underperform cross-encoders. Cross-encoders can leverage their attention heads to exploit inter-sentence interactions for better performance, but they require task fine-tuning and are computationally more expensive. In this paper, we present a completely unsupervised sentence representation model, termed Trans-Encoder, that combines the two learning paradigms into an iterative joint framework to simultaneously learn enhanced bi- and cross-encoders. Specifically, starting from a pre-trained language model (PLM), we first convert it into an unsupervised bi-encoder and then alternate between the bi- and cross-encoder task formulations. In each alternation, one formulation produces pseudo-labels that serve as learning signals for the other. We then propose an extension that conducts this self-distillation on multiple PLMs in parallel and uses the average of their pseudo-labels for mutual distillation. Trans-Encoder yields, to the best of our knowledge, the first completely unsupervised cross-encoder and also a state-of-the-art unsupervised bi-encoder for sentence similarity. Both the bi-encoder and cross-encoder formulations of Trans-Encoder outperform recently proposed state-of-the-art unsupervised sentence encoders such as Mirror-BERT and SimCSE by up to 5% on sentence similarity benchmarks.
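
The alternating distillation cycle described in the abstract can be sketched roughly as follows. The snippet below is a minimal illustration built on the sentence-transformers library; the model names, losses, toy data, number of alternations, and the bi-encoder's initial unsupervised tuning (SimCSE- or Mirror-BERT-style in the paper) are simplified assumptions, not the authors' released recipe.

```python
# Minimal sketch of Trans-Encoder's alternating (self-)distillation loop using
# sentence-transformers. Losses, hyperparameters, and the initial unsupervised
# bi-encoder tuning are illustrative stand-ins, not the authors' exact setup.
from torch.utils.data import DataLoader
from sentence_transformers import (CrossEncoder, InputExample, SentenceTransformer,
                                   losses, util)

# Unlabelled sentence pairs from the target task's raw text (toy examples here).
pairs = [
    ("A man is playing a guitar.", "A person plays an instrument."),
    ("A dog runs in the park.", "The stock market fell sharply today."),
]

# In the paper, the bi-encoder is first tuned with an unsupervised contrastive
# objective; here we simply load a PLM to keep the sketch short.
bi_encoder = SentenceTransformer("bert-base-uncased")
cross_encoder = CrossEncoder("bert-base-uncased", num_labels=1)

for alternation in range(3):  # a few bi -> cross -> bi cycles
    # 1) Bi-encoder produces pseudo-labels: cosine similarity of the embeddings.
    emb_a = bi_encoder.encode([a for a, _ in pairs], convert_to_tensor=True)
    emb_b = bi_encoder.encode([b for _, b in pairs], convert_to_tensor=True)
    bi_scores = util.cos_sim(emb_a, emb_b).diagonal().clamp(0, 1).tolist()

    # 2) Distil the bi-encoder's scores into the cross-encoder.
    cross_examples = [InputExample(texts=[a, b], label=s)
                      for (a, b), s in zip(pairs, bi_scores)]
    cross_encoder.fit(train_dataloader=DataLoader(cross_examples, shuffle=True,
                                                  batch_size=2),
                      epochs=1)

    # 3) Cross-encoder predictions become pseudo-labels for the bi-encoder.
    cross_scores = cross_encoder.predict(pairs)
    bi_examples = [InputExample(texts=[a, b], label=float(s))
                   for (a, b), s in zip(pairs, cross_scores)]
    bi_loader = DataLoader(bi_examples, shuffle=True, batch_size=2)
    bi_encoder.fit(train_objectives=[(bi_loader,
                                      losses.CosineSimilarityLoss(bi_encoder))],
                   epochs=1, warmup_steps=0)
```

The mutual-distillation extension would run several such loops in parallel (one per PLM) and average their pseudo-label scores before each distillation step.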

ICLR 2022
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Semantic Textual Similarity | SICK | Trans-Encoder-RoBERTa-large-cross (unsup.) | Spearman Correlation | 0.7163 | #13 |
| Semantic Textual Similarity | SICK | Trans-Encoder-BERT-base-cross (unsup.) | Spearman Correlation | 0.6952 | #17 |
| Semantic Textual Similarity | SICK | Trans-Encoder-BERT-large-cross (unsup.) | Spearman Correlation | 0.7192 | #12 |
| Semantic Textual Similarity | SICK | Trans-Encoder-BERT-base-bi (unsup.) | Spearman Correlation | 0.7276 | #11 |
| Semantic Textual Similarity | SICK | Trans-Encoder-BERT-large-bi (unsup.) | Spearman Correlation | 0.7133 | #14 |
| Semantic Textual Similarity | STS12 | Trans-Encoder-BERT-base-bi (unsup.) | Spearman Correlation | 0.7509 | #12 |
| Semantic Textual Similarity | STS12 | Trans-Encoder-RoBERTa-large-cross (unsup.) | Spearman Correlation | 0.7828 | #8 |
| Semantic Textual Similarity | STS12 | Trans-Encoder-RoBERTa-base-cross (unsup.) | Spearman Correlation | 0.7637 | #11 |
| Semantic Textual Similarity | STS12 | Trans-Encoder-BERT-large-bi (unsup.) | Spearman Correlation | 0.7819 | #9 |
| Semantic Textual Similarity | STS13 | Trans-Encoder-BERT-large-cross (unsup.) | Spearman Correlation | 0.8831 | #9 |
| Semantic Textual Similarity | STS13 | Trans-Encoder-BERT-large-bi (unsup.) | Spearman Correlation | 0.8851 | #8 |
| Semantic Textual Similarity | STS13 | Trans-Encoder-RoBERTa-large-cross (unsup.) | Spearman Correlation | 0.8831 | #9 |
| Semantic Textual Similarity | STS13 | Trans-Encoder-BERT-base-cross (unsup.) | Spearman Correlation | 0.8559 | #12 |
| Semantic Textual Similarity | STS13 | Trans-Encoder-BERT-base-bi (unsup.) | Spearman Correlation | 0.851 | #13 |
| Semantic Textual Similarity | STS14 | Trans-Encoder-BERT-large-bi (unsup.) | Spearman Correlation | 0.8137 | #11 |
| Semantic Textual Similarity | STS14 | Trans-Encoder-BERT-base-bi (unsup.) | Spearman Correlation | 0.779 | #13 |
| Semantic Textual Similarity | STS14 | Trans-Encoder-RoBERTa-large-bi (unsup.) | Spearman Correlation | 0.8176 | #10 |
| Semantic Textual Similarity | STS14 | Trans-Encoder-RoBERTa-base-cross (unsup.) | Spearman Correlation | 0.7903 | #12 |
| Semantic Textual Similarity | STS14 | Trans-Encoder-RoBERTa-large-cross (unsup.) | Spearman Correlation | 0.8194 | #9 |
| Semantic Textual Similarity | STS15 | Trans-Encoder-BERT-base-bi (unsup.) | Spearman Correlation | 0.8508 | #12 |
| Semantic Textual Similarity | STS15 | Trans-Encoder-BERT-base-cross (unsup.) | Spearman Correlation | 0.8444 | #13 |
| Semantic Textual Similarity | STS15 | Trans-Encoder-RoBERTa-large-cross (unsup.) | Spearman Correlation | 0.8863 | #7 |
| Semantic Textual Similarity | STS15 | Trans-Encoder-RoBERTa-base-cross (unsup.) | Spearman Correlation | 0.8577 | #11 |
| Semantic Textual Similarity | STS15 | Trans-Encoder-BERT-large-bi (unsup.) | Spearman Correlation | 0.8816 | #8 |
| Semantic Textual Similarity | STS16 | Trans-Encoder-BERT-large-bi (unsup.) | Spearman Correlation | 0.8481 | #9 |
| Semantic Textual Similarity | STS16 | Trans-Encoder-BERT-base-bi (unsup.) | Spearman Correlation | 0.8305 | #12 |
| Semantic Textual Similarity | STS16 | Trans-Encoder-RoBERTa-large-cross (unsup.) | Spearman Correlation | 0.8503 | #7 |
| Semantic Textual Similarity | STS16 | Trans-Encoder-RoBERTa-base-cross (unsup.) | Spearman Correlation | 0.8377 | #11 |
| Semantic Textual Similarity | STS Benchmark | Trans-Encoder-BERT-large-bi (unsup.) | Spearman Correlation | 0.8616 | #22 |
| Semantic Textual Similarity | STS Benchmark | Trans-Encoder-RoBERTa-large-bi (unsup.) | Spearman Correlation | 0.8655 | #19 |
| Semantic Textual Similarity | STS Benchmark | Trans-Encoder-RoBERTa-base-cross (unsup.) | Spearman Correlation | 0.8465 | #26 |
| Semantic Textual Similarity | STS Benchmark | Trans-Encoder-RoBERTa-large-cross (unsup.) | Spearman Correlation | 0.867 | #17 |
| Semantic Textual Similarity | STS Benchmark | Trans-Encoder-BERT-base-bi (unsup.) | Spearman Correlation | 0.839 | #29 |