# QQP

6 papers with code • 1 benchmark • 1 dataset

QQP (Quora Question Pairs) is a paraphrase identification task: given a pair of questions collected from Quora, predict whether the two questions are semantically equivalent (duplicates).

# From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression

14 Dec 2021

Unified in contrastive learning, CAP enables the pruned model to learn from the pre-trained model for task-agnostic knowledge, and from the fine-tuned model for task-specific knowledge.
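
As a rough illustration of this dual-teacher idea, the sketch below (PyTorch; the function names, weights, and loss form are illustrative assumptions, not CAP's actual implementation) contrasts the pruned model's representation against both a frozen pre-trained encoder and a frozen fine-tuned encoder with an InfoNCE-style objective.

```python
import torch
import torch.nn.functional as F

def info_nce(query, positives, temperature=0.07):
    """InfoNCE-style loss: pull each query toward its paired positive,
    push it away from the other positives in the batch (in-batch negatives)."""
    query = F.normalize(query, dim=-1)          # (B, D)
    positives = F.normalize(positives, dim=-1)  # (B, D)
    logits = query @ positives.t() / temperature  # (B, B) similarity matrix
    labels = torch.arange(query.size(0), device=query.device)
    return F.cross_entropy(logits, labels)

def dual_teacher_contrastive_loss(pruned_repr, pretrained_repr, finetuned_repr,
                                  w_pre=0.5, w_fine=0.5):
    """Hypothetical combination: the pruned (student) representation is
    contrasted against the frozen pre-trained teacher (task-agnostic
    knowledge) and the fine-tuned teacher (task-specific knowledge)."""
    loss_pre = info_nce(pruned_repr, pretrained_repr.detach())
    loss_fine = info_nce(pruned_repr, finetuned_repr.detach())
    return w_pre * loss_pre + w_fine * loss_fine
```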


# On the Importance of Adaptive Data Collection for Extremely Imbalanced Pairwise Tasks

Many pairwise classification tasks, such as paraphrase detection and open-domain question answering, naturally have extreme label imbalance (e.g., 99.99% of examples are negatives).
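
To make the imbalance concrete, here is a minimal sketch of model-in-the-loop data collection (not the paper's exact procedure; the function name, scores, and budget are illustrative): the current model's predictions decide which candidate pairs from a huge, mostly-negative pool are sent for annotation next, since uniform sampling would almost never surface a positive.

```python
import numpy as np

def adaptive_collection_round(model_scores, labeled_mask, budget=100):
    """One illustrative round of adaptive collection: from the unlabeled
    pool, pick the candidate pairs the current model scores highest as
    positives -- the most informative examples when true positives make up
    roughly 0.01% of the pool."""
    candidate_ids = np.where(~labeled_mask)[0]
    # Rank unlabeled candidates by predicted positive probability (descending).
    ranked = candidate_ids[np.argsort(-model_scores[candidate_ids])]
    return ranked[:budget]  # indices to send to annotators
```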


# Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level

We develop statistically rigorous methods to address this, and after accounting for pretraining and finetuning noise, we find that our BERT-Large is worse than BERT-Mini on at least 1-4% of instances across MNLI, SST-2, and QQP, compared to the overall accuracy improvement of 2-10%.
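The following is a minimal sketch of this kind of instance-level comparison, assuming per-instance correctness matrices collected over several pretraining/finetuning seeds (the array shapes and function name are illustrative, not the paper's exact statistical procedure):

```python
import numpy as np

def fraction_large_worse(correct_large, correct_mini):
    """Estimate the fraction of instances where the larger model is worse.

    `correct_large`, `correct_mini`: boolean arrays of shape
    (n_seeds, n_instances) with per-instance correctness across several
    training seeds. Averaging over seeds separates consistent
    instance-level regressions from seed-to-seed noise.
    """
    acc_large = correct_large.mean(axis=0)  # per-instance accuracy over seeds
    acc_mini = correct_mini.mean(axis=0)
    return float((acc_large < acc_mini).mean())
```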


# LEAP: Learnable Pruning for Transformer-based Models

30 May 2021

Moreover, to reduce hyperparameter tuning, a novel adaptive regularization coefficient is deployed to control the regularization penalty.
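
One plausible, purely illustrative form of such an adaptive coefficient (not LEAP's actual rule) scales the pruning penalty by the remaining gap to a target sparsity, so the penalty strength does not have to be scheduled by hand:

```python
def adaptive_reg_coefficient(current_sparsity, target_sparsity,
                             base_strength=1.0, scale=10.0):
    """Illustrative adaptive coefficient: the further the model is from the
    target sparsity, the stronger the regularization penalty; once the
    target is reached, the penalty fades to zero."""
    gap = max(target_sparsity - current_sparsity, 0.0)
    return base_strength * scale * gap

# Example: loss = task_loss + adaptive_reg_coefficient(0.3, 0.9) * prune_penalty
```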


# Contrastive Representation Learning for Exemplar-Guided Paraphrase Generation

Exemplar-Guided Paraphrase Generation (EGPG) aims to generate a target sentence which conforms to the style of the given exemplar while encapsulating the content information of the source sentence.


# Linear Connectivity Reveals Generalization Strategies

24 May 2022

It is widely accepted in the mode connectivity literature that when two neural networks are trained similarly on the same data, they are connected by a path through parameter space over which test set accuracy is maintained.
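
A minimal sketch of how such a path is probed in practice (assuming two PyTorch models with identical architectures; the function name is illustrative): linearly interpolate the two parameter sets and evaluate test accuracy at each interpolation coefficient.

```python
import copy

def linear_path_models(model_a, model_b, alphas=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Yield models whose parameters lie on the straight line between two
    checkpoints: theta(alpha) = (1 - alpha) * theta_a + alpha * theta_b.
    If test accuracy stays high for every alpha, the two solutions are
    linearly mode-connected; a dip (a "barrier") suggests they learned
    different generalization strategies."""
    state_a, state_b = model_a.state_dict(), model_b.state_dict()
    for alpha in alphas:
        new_state = {}
        for name, pa in state_a.items():
            pb = state_b[name]
            if pa.is_floating_point():
                new_state[name] = (1 - alpha) * pa + alpha * pb
            else:
                new_state[name] = pa  # integer buffers (e.g. counters) copied as-is
        interp = copy.deepcopy(model_a)
        interp.load_state_dict(new_state)
        yield alpha, interp
```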
