Paraphrase Generation

68 papers with code • 3 benchmarks • 16 datasets

Paraphrase Generation involves transforming a natural language sentence into a new sentence that has the same semantic meaning but a different syntactic or lexical surface form.

Most implemented papers

Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to Corpus Exploration

sf-wa-326/phrase-bert-topic-model EMNLP 2021

Phrase representations derived from BERT often do not exhibit complex phrasal compositionality, as the model relies instead on lexical similarity to determine semantic relatedness.

Towards Better Characterization of Paraphrases

tlkh/paraphrase-metrics ACL ARR September 2021

To effectively characterize the nature of paraphrase pairs without expert human annotation, we propose two new metrics: word position deviation (WPD) and lexical deviation (LD).
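The paper defines the exact formulas for WPD and LD; as a rough illustration of what such metrics capture, the sketch below computes a simple lexical-deviation-style score (vocabulary overlap) and a word-position-deviation-style score (how far shared words move between the two sentences). The function names and formulas here are illustrative assumptions, not the paper's definitions.

```python
def lexical_deviation(sent_a, sent_b):
    # Illustrative LD-style score (assumption, not the paper's formula):
    # fraction of vocabulary NOT shared between the two sentences.
    # 0.0 = identical vocabulary, 1.0 = completely disjoint.
    a, b = set(sent_a.lower().split()), set(sent_b.lower().split())
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def word_position_deviation(sent_a, sent_b):
    # Illustrative WPD-style score (assumption): mean normalized shift
    # in position of the words the two sentences share.
    toks_a, toks_b = sent_a.lower().split(), sent_b.lower().split()
    shared = set(toks_a) & set(toks_b)
    if not shared:
        return 0.0
    shifts = []
    for w in shared:
        # Normalize each first-occurrence index to [0, 1].
        pos_a = toks_a.index(w) / (len(toks_a) - 1 or 1)
        pos_b = toks_b.index(w) / (len(toks_b) - 1 or 1)
        shifts.append(abs(pos_a - pos_b))
    return sum(shifts) / len(shifts)
```

Under this sketch, a pure word-order paraphrase ("the cat sat on the mat" vs. "on the mat the cat sat") scores 0 on lexical deviation but high on word position deviation, while a synonym-substitution paraphrase behaves the opposite way — which is the kind of distinction the two metrics are designed to draw.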

Language Invariant Properties in Natural Language Processing

milanlproc/language-invariant-properties nlppower (ACL) 2022

We introduce language invariant properties, i.e., properties that should not change when we transform text, and show how they can be used to quantitatively evaluate the robustness of transformation algorithms.
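One way to operationalize this idea is to predict a property (e.g., sentiment) over the original texts and over their transformations, then measure how much the label distribution shifts. The sketch below does this with total variation distance; the paper's actual evaluation protocol may differ, and the function name and distance choice are assumptions for illustration.

```python
from collections import Counter

def property_shift(labels_original, labels_transformed):
    # Illustrative check of a language invariant property (assumption):
    # compare the distribution of a predicted label (e.g. sentiment)
    # before vs. after transformation, via total variation distance.
    # 0.0 = property distribution fully preserved, 1.0 = fully changed.
    def dist(labels):
        counts = Counter(labels)
        total = len(labels)
        return {k: v / total for k, v in counts.items()}

    p, q = dist(labels_original), dist(labels_transformed)
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)
```

For a paraphrase generator, running a fixed sentiment classifier over inputs and outputs and observing a shift near 0 would suggest the transformation preserves that invariant property.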

Aspect Sentiment Quad Prediction as Paraphrase Generation

isakzhang/absa-quad EMNLP 2021

Aspect-based sentiment analysis (ABSA), which typically involves four fundamental sentiment elements (the aspect category, aspect term, opinion term, and sentiment polarity), has been extensively studied in recent years.

Improving Non-autoregressive Generation with Mixup Training

kongds/mist 21 Oct 2021

While pre-trained language models have achieved great success on various natural language understanding tasks, how to effectively leverage them into non-autoregressive generation tasks remains a challenge.

Ask me in your own words: paraphrasing for multitask question answering

ghomasHudson/paraphraseDecanlpCorpus PeerJ Computer Science 2021

Multitask learning has led to significant advances in Natural Language Processing, including the decaNLP benchmark where question answering is used to frame 10 natural language understanding tasks in a single model.

Visual Information Guided Zero-Shot Paraphrase Generation

l-zhe/vipg COLING 2022

Zero-shot paraphrase generation has drawn much attention, as large-scale, high-quality paraphrase corpora are limited.

On the Evaluation Metrics for Paraphrase Generation

shadowkiller33/parascore 17 Feb 2022

In this paper, we revisit automatic metrics for paraphrase evaluation and obtain two findings that disobey conventional wisdom: (1) reference-free metrics achieve better performance than their reference-based counterparts.