CLEAR: Contrastive Learning for Sentence Representation

31 Dec 2020 · Zhuofeng Wu, Sinong Wang, Jiatao Gu, Madian Khabsa, Fei Sun, Hao Ma

Pre-trained language models have proven their power in capturing implicit language features. However, most pre-training approaches focus on word-level training objectives, while sentence-level objectives are rarely studied. In this paper, we propose Contrastive LEArning for sentence Representation (CLEAR), which employs multiple sentence-level augmentation strategies to learn a noise-invariant sentence representation. These augmentations include word and span deletion, reordering, and substitution. Furthermore, we investigate the key reasons that make contrastive learning effective through numerous experiments. We observe that different sentence augmentations during pre-training lead to different performance improvements on various downstream tasks. Our approach is shown to outperform multiple existing methods on both the SentEval and GLUE benchmarks.
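
The four augmentation strategies named in the abstract are easy to prototype. The sketch below is a minimal, hypothetical Python illustration of word deletion, span deletion, reordering, and substitution applied to a whitespace-tokenized sentence; the deletion/substitution ratios, the `SYNONYMS` lookup, and the choice to sample one augmentation per view are illustrative assumptions, not the paper's exact settings (CLEAR operates on subword tokens).

```python
import random
from functools import partial
from typing import Callable, Dict, List

def word_deletion(tokens: List[str], ratio: float = 0.15) -> List[str]:
    """Randomly drop a fraction of the tokens."""
    kept = [t for t in tokens if random.random() > ratio]
    return kept or tokens[:1]  # never return an empty sentence

def span_deletion(tokens: List[str], ratio: float = 0.15) -> List[str]:
    """Delete one contiguous span covering roughly `ratio` of the tokens."""
    span_len = max(1, int(len(tokens) * ratio))
    start = random.randrange(max(1, len(tokens) - span_len + 1))
    out = tokens[:start] + tokens[start + span_len:]
    return out or tokens[:1]

def reordering(tokens: List[str]) -> List[str]:
    """Cut the sentence into three segments and swap the outer two."""
    if len(tokens) < 4:
        return list(tokens)
    i, j = sorted(random.sample(range(1, len(tokens)), 2))
    return tokens[j:] + tokens[i:j] + tokens[:i]

# Toy synonym lookup used only for illustration; a real system would draw
# candidates from a thesaurus or an embedding neighborhood.
SYNONYMS: Dict[str, List[str]] = {"quick": ["fast", "speedy"], "lazy": ["idle"]}

def substitution(tokens: List[str], synonyms: Dict[str, List[str]],
                 ratio: float = 0.15) -> List[str]:
    """Replace a fraction of the tokens with synonyms from a lookup table."""
    return [random.choice(synonyms[t])
            if t in synonyms and random.random() < ratio else t
            for t in tokens]

AUGMENTATIONS: List[Callable[[List[str]], List[str]]] = [
    word_deletion,
    span_deletion,
    reordering,
    partial(substitution, synonyms=SYNONYMS),
]

def contrastive_views(sentence: str,
                      augs: List[Callable[[List[str]], List[str]]]) -> tuple:
    """Apply two independently sampled augmentations to the same sentence;
    the two resulting views form a positive pair for contrastive learning."""
    tokens = sentence.split()
    a, b = random.sample(augs, 2)
    return " ".join(a(tokens)), " ".join(b(tokens))

if __name__ == "__main__":
    print(contrastive_views("the quick brown fox jumps over the lazy dog",
                            AUGMENTATIONS))
```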

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Linguistic Acceptability | CoLA | MLM + del-span + reorder | Accuracy | 64.3% | #21 |
| Semantic Textual Similarity | MRPC | MLM + del-word + reorder | Accuracy | 90.6% | #11 |
| Natural Language Inference | QNLI | MLM + subs + del-span | Accuracy | 93.4% | #21 |
| Question Answering | Quora Question Pairs | MLM + subs + del-span | Accuracy | 90.3% | #5 |
| Natural Language Inference | RTE | MLM + del-span | Accuracy | 79.8% | #37 |
| Sentiment Analysis | SST-2 Binary classification | MLM + del-word + reorder | Accuracy | 94.5% | #34 |
| Semantic Textual Similarity | STS Benchmark | MLM + del-word | Pearson Correlation | 0.905 | #18 |
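
The Model column denotes the pre-training objective: masked language modeling (MLM) combined with contrastive learning over the listed augmentations. The sketch below shows a SimCLR-style NT-Xent loss over a batch of paired sentence embeddings as a minimal stand-in for the contrastive term; the temperature value and the equal weighting against the MLM loss are assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.1) -> torch.Tensor:
    """SimCLR-style contrastive loss over paired sentence embeddings.

    z1[i] and z2[i] are encoder outputs for two augmented views of the same
    sentence; every other sentence in the batch serves as a negative.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2n, d), unit norm
    sim = z @ z.t() / temperature                        # scaled cosine similarity
    sim.fill_diagonal_(float("-inf"))                    # mask self-similarity
    # The positive for row i is row i + n (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(sim.device)
    return F.cross_entropy(sim, targets)

# During pre-training the contrastive term is combined with masked language
# modeling, e.g. total_loss = mlm_loss + nt_xent_loss(z1, z2); the relative
# weighting here is an assumption, not a value taken from the paper.
```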
