MRPC

9 papers with code • 0 benchmarks • 2 datasets

MRPC (the Microsoft Research Paraphrase Corpus) is a paraphrase identification task: given a pair of sentences, predict whether they are semantically equivalent. It is one of the sentence-pair classification tasks in the GLUE benchmark.

Most implemented papers

Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning

rabeehk/compacter ACL 2021

Although pretrained language models can be fine-tuned to produce state-of-the-art results for a very wide range of language understanding tasks, the dynamics of this process are not well understood, especially in the low data regime.

BET: A Backtranslation Approach for Easy Data Augmentation in Transformer-based Paraphrase Identification Context

jpcorb20/bet-backtranslation-paraphrase-experiment 25 Sep 2020

We call this approach BET, and use it to analyze backtranslation data augmentation on transformer-based architectures.
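The core idea of backtranslation augmentation can be sketched in a few lines: translate a sentence into a pivot language and back, and keep the round-trip output as a new paraphrase. The `translate` callable below is a hypothetical stand-in (the paper uses real MT systems); the toy lookup-table translator exists only to make the sketch runnable.

```python
def backtranslate(sentence, translate, pivot="de"):
    """Round-trip a sentence through a pivot language to get a paraphrase.

    `translate(text, src, tgt)` is a placeholder for a real MT model.
    """
    pivot_text = translate(sentence, src="en", tgt=pivot)
    return translate(pivot_text, src=pivot, tgt="en")

# Toy stand-in translator (a lookup table), just to show the round trip;
# unknown inputs pass through unchanged.
_toy = {
    ("the cat sat", "en->de"): "die Katze sass",
    ("die Katze sass", "de->en"): "the cat was sitting",
}

def toy_translate(text, src, tgt):
    return _toy.get((text, f"{src}->{tgt}"), text)

augmented = backtranslate("the cat sat", toy_translate)
```

In practice the augmented pair ("the cat sat", "the cat was sitting") would be added to the paraphrase-identification training set with a positive label.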

SupCL-Seq: Supervised Contrastive Learning for Downstream Optimized Sequence Representations

hooman650/supcl-seq Findings (EMNLP) 2021

This paper introduces SupCL-Seq, which extends the supervised contrastive learning from computer vision to the optimization of sequence representations in NLP.
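The supervised contrastive objective SupCL-Seq builds on (from Khosla et al.'s computer-vision formulation) pulls same-label examples together and pushes different-label examples apart in embedding space. A minimal pure-Python sketch of that loss, assuming L2-normalizable embeddings and integer labels (not the paper's exact implementation):

```python
import math

def supcon_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss over a batch of embeddings.

    For each anchor i, positives are all other samples with the same
    label; the denominator runs over every other sample in the batch.
    """
    # L2-normalize each embedding
    z = []
    for v in embeddings:
        norm = math.sqrt(sum(x * x for x in v))
        z.append([x / norm for x in v])
    n = len(z)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))

    total, anchors = 0.0, 0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors with no positives contribute nothing
        denom = sum(math.exp(dot(z[i], z[a]) / tau) for a in range(n) if a != i)
        # average negative log-probability over this anchor's positives
        total += -sum(
            math.log(math.exp(dot(z[i], z[p]) / tau) / denom)
            for p in positives
        ) / len(positives)
        anchors += 1
    return total / anchors
```

With two well-separated same-label clusters the loss is close to zero; mixing the clusters drives it up, which is the gradient signal used to shape the sequence representations.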

Towards Better Characterization of Paraphrases

tlkh/paraphrase-metrics ACL ARR September 2021

To effectively characterize the nature of paraphrase pairs without expert human annotation, we propose two new metrics: word position deviation (WPD) and lexical deviation (LD).
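To illustrate what a lexical-deviation-style metric measures, here is a deliberately simple stand-in: one minus the Jaccard overlap of the two sentences' token sets, so identical wording scores 0 and disjoint wording scores 1. This is an illustrative sketch only, not the paper's actual WPD or LD definitions.

```python
def lexical_deviation(sent_a, sent_b):
    """Toy lexical-deviation score for a sentence pair.

    1 - Jaccard overlap of lowercased token sets: 0.0 for identical
    wording, 1.0 for no shared tokens. NOT the paper's LD formula,
    just a stand-in showing the kind of quantity such metrics capture.
    """
    a = set(sent_a.lower().split())
    b = set(sent_b.lower().split())
    return 1 - len(a & b) / len(a | b)
```

A paraphrase pair like ("He bought a car", "He purchased a car") scores in between the extremes, which is the regime these metrics are designed to characterize.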

Enhancing Text Generation with Cooperative Training

wutong4012/Self-Consistent-Learning 16 Mar 2023

Recently, there has been a surge in the use of generated data to enhance the performance of downstream models, largely due to the advancements in pre-trained language models.

Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning

strong-ai-lab/logical-equivalence-driven-amr-data-augmentation-for-representation-learning 21 May 2023

Combining large language models with logical reasoning enhances their capacity to address problems in a robust and reliable manner.