MRPC (Microsoft Research Paraphrase Corpus)
10 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning
Although pretrained language models can be fine-tuned to produce state-of-the-art results for a very wide range of language understanding tasks, the dynamics of this process are not well understood, especially in the low-data regime.
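The measurement behind the title is random-subspace fine-tuning: reparameterize the weights as theta = theta_0 + P z, where P is a fixed random projection and only the low-dimensional z is trained; the intrinsic dimension is roughly the smallest dim(z) that recovers most of full fine-tuning performance. Below is a minimal PyTorch sketch of that reparameterization; the toy model, dense projection, and scaling are illustrative stand-ins (the paper works with large pretrained models and structured random projections).

```python
import torch
import torch.nn as nn
from torch.func import functional_call

class SubspaceFineTuner(nn.Module):
    """Train theta = theta_0 + P @ z, where only z (dimension d) is learned.
    P is a fixed random projection into the full parameter space."""
    def __init__(self, model: nn.Module, d: int):
        super().__init__()
        self.model = model
        for p in model.parameters():
            p.requires_grad_(False)
        self.theta0 = {n: p.detach().clone() for n, p in model.named_parameters()}
        total = sum(p.numel() for p in self.theta0.values())
        # Dense random projection for illustration only.
        self.register_buffer("P", torch.randn(total, d) / total ** 0.5)
        self.z = nn.Parameter(torch.zeros(d))  # the only trainable tensor

    def forward(self, x):
        offset = self.P @ self.z
        params, i = {}, 0
        for name, p0 in self.theta0.items():
            k = p0.numel()
            params[name] = p0 + offset[i:i + k].view_as(p0)
            i += k
        # functional_call keeps gradients flowing back to z
        return functional_call(self.model, params, (x,))

# Toy usage: sweep d upward; the smallest d reaching ~90% of full
# fine-tuning accuracy is the task's intrinsic dimension (d90).
base = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
tuner = SubspaceFineTuner(base, d=100)
opt = torch.optim.Adam([tuner.z], lr=1e-2)
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(tuner(x), y)
loss.backward()
opt.step()
```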
BET: A Backtranslation Approach for Easy Data Augmentation in Transformer-based Paraphrase Identification Context
We call this approach BET, and use it to analyze backtranslation as a data augmentation technique for transformer-based architectures.
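Backtranslation itself is simple to reproduce: translate each sentence into a pivot language and back, then pair the round-tripped sentence with the original as an extra training example. A minimal sketch with MarianMT checkpoints, using German as the pivot (BET evaluates several pivot languages; the generation settings here are assumptions):

```python
from transformers import MarianMTModel, MarianTokenizer

def translate(texts, model_name):
    # Load a MarianMT translation model and run greedy generation.
    tok = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
    out = model.generate(**batch, max_length=128)
    return [tok.decode(t, skip_special_tokens=True) for t in out]

def backtranslate(texts):
    # en -> de -> en round trip; the output is a noisy paraphrase.
    pivot = translate(texts, "Helsinki-NLP/opus-mt-en-de")
    return translate(pivot, "Helsinki-NLP/opus-mt-de-en")

print(backtranslate(["The quick brown fox jumps over the lazy dog."]))
```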
SupCL-Seq: Supervised Contrastive Learning for Downstream Optimized Sequence Representations
This paper introduces SupCL-Seq, which extends supervised contrastive learning from computer vision to the optimization of sequence representations in NLP.
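The core ingredient is the supervised contrastive (SupCon) loss of Khosla et al.: embeddings that share a label are pulled together, all others pushed apart. A self-contained PyTorch sketch of that loss follows; the temperature and the way views of each sequence are generated are assumptions here, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def supcon_loss(embeddings: torch.Tensor, labels: torch.Tensor,
                temperature: float = 0.07) -> torch.Tensor:
    """Supervised contrastive loss over (N, D) embeddings.
    Positives for anchor i are all j != i with labels[j] == labels[i]."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                      # (N, N) cosine / tau
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))    # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_per_anchor = pos_mask.sum(1).clamp(min=1)      # avoid div-by-zero
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_per_anchor
    return loss.mean()

# Usage: embeddings from any encoder, e.g. mean-pooled BERT outputs.
emb = torch.randn(8, 128, requires_grad=True)
lab = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
supcon_loss(emb, lab).backward()
```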
Towards Better Characterization of Paraphrases
To effectively characterize the nature of paraphrase pairs without expert human annotation, we propose two new metrics: word position deviation (WPD) and lexical deviation (LD).
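The exact definitions live in the paper; the proxies below are not the paper's formulas and only convey the intuition: LD measures how much the vocabulary differs between a pair, while WPD measures how far shared words have moved.

```python
def lexical_deviation(a: str, b: str) -> float:
    """Proxy LD: fraction of tokens not shared (1 - Jaccard overlap)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    union = ta | tb
    if not union:
        return 0.0
    return 1.0 - len(ta & tb) / len(union)

def word_position_deviation(a: str, b: str) -> float:
    """Proxy WPD: mean normalized shift in the positions of shared words."""
    wa, wb = a.lower().split(), b.lower().split()
    shared = set(wa) & set(wb)
    if not shared:
        return 1.0
    shifts = [abs(wa.index(w) / max(len(wa) - 1, 1)
                  - wb.index(w) / max(len(wb) - 1, 1)) for w in shared]
    return sum(shifts) / len(shifts)

# A pure reordering paraphrase: zero LD but high WPD.
a = "the cat chased the dog"
b = "the dog chased the cat"
print(lexical_deviation(a, b), word_position_deviation(a, b))
```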
Enhancing Text Generation with Cooperative Training
Recently, there has been a surge in the use of generated data to enhance the performance of downstream models, largely due to advances in pre-trained language models.
Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning
Combining large language models with logical reasoning enhances their capacity to address problems in a robust and reliable manner.
Exploring RWKV for Sentence Embeddings: Layer-wise Analysis and Baseline Comparison for Semantic Similarity
This paper investigates the efficacy of RWKV, a novel language model architecture known for its linear attention mechanism, for generating sentence embeddings in a zero-shot setting.
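The recipe reduces to pooling the hidden states at each layer and scoring sentence pairs with cosine similarity. A sketch using the Hugging Face RWKV port follows; the checkpoint name and mean pooling are assumptions (the paper sweeps layers and compares against transformer baselines):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed checkpoint; any RWKV model ported to transformers should work.
tok = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")
model = AutoModel.from_pretrained("RWKV/rwkv-4-169m-pile")

def layer_embeddings(text):
    """One mean-pooled sentence vector per layer (index 0 = embeddings)."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return [h.mean(dim=1).squeeze(0) for h in out.hidden_states]

a = layer_embeddings("A man is playing a guitar.")
b = layer_embeddings("Someone plays guitar.")
for i, (ea, eb) in enumerate(zip(a, b)):
    cos = torch.nn.functional.cosine_similarity(ea, eb, dim=0).item()
    print(f"layer {i}: cosine = {cos:.3f}")
```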