Paraphrase Generation
68 papers with code • 3 benchmarks • 16 datasets
Paraphrase Generation involves transforming a natural language sentence into a new sentence that preserves the semantic meaning but differs in syntactic or lexical surface form, e.g., "The cat chased the mouse" → "The mouse was chased by the cat".
Most implemented papers
Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to Corpus Exploration
Phrase representations derived from BERT often do not exhibit complex phrasal compositionality, as the model relies instead on lexical similarity to determine semantic relatedness.
Towards Document-Level Paraphrase Generation with Sentence Rewriting and Reordering
Paraphrase generation is an important task in natural language processing.
Towards Better Characterization of Paraphrases
To effectively characterize the nature of paraphrase pairs without expert human annotation, we propose two new metrics: word position deviation (WPD) and lexical deviation (LD).
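The paper's exact formulations of WPD and LD are not reproduced here; as a rough illustrative sketch (assuming LD behaves like one minus the Jaccard word overlap, and WPD like the mean shift in normalized word position for shared words), the two metrics might look like:

```python
def lexical_deviation(src_tokens, par_tokens):
    """Illustrative LD: fraction of vocabulary not shared between the
    two sentences (1 - Jaccard overlap). The paper's exact formulation
    may differ."""
    a, b = set(src_tokens), set(par_tokens)
    if not (a | b):
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def word_position_deviation(src_tokens, par_tokens):
    """Illustrative WPD: mean absolute difference in normalized position
    for words appearing in both sentences (last occurrence wins for
    duplicated words)."""
    shared = set(src_tokens) & set(par_tokens)
    if not shared:
        return 1.0

    def norm_pos(tokens):
        n = max(len(tokens) - 1, 1)
        return {t: i / n for i, t in enumerate(tokens) if t in shared}

    p1, p2 = norm_pos(src_tokens), norm_pos(par_tokens)
    return sum(abs(p1[t] - p2[t]) for t in shared) / len(shared)
```

On this sketch, a trivial copy scores 0 on both metrics, while an active-to-passive rewrite keeps LD moderate but pushes WPD up, separating lexical from structural change.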
Language Invariant Properties in Natural Language Processing
We introduce language invariant properties, i.e., properties that should not change when we transform text, and show how they can be used to quantitatively evaluate the robustness of transformation algorithms.
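The evaluation idea can be sketched as measuring how often a chosen property survives the transformation; `invariance_rate` below is a hypothetical helper, not the paper's actual measure:

```python
def invariance_rate(texts, transform, property_fn):
    """Fraction of inputs whose property value is preserved by the
    transformation -- a simple robustness score (illustrative; the
    paper's exact measures may differ)."""
    if not texts:
        return 1.0
    preserved = sum(
        property_fn(t) == property_fn(transform(t)) for t in texts
    )
    return preserved / len(texts)
```

For a paraphraser, `property_fn` would be something like a sentiment or topic classifier, and a robust system would keep the rate close to 1.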
Aspect Sentiment Quad Prediction as Paraphrase Generation
Aspect-based sentiment analysis (ABSA) has been extensively studied in recent years and typically involves four fundamental sentiment elements: the aspect category, aspect term, opinion term, and sentiment polarity.
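Casting quad prediction as paraphrase generation means linearizing each sentiment quad into a natural language target sentence for a seq2seq model. The template wording below (`quad_to_paraphrase`) is a hypothetical illustration of that idea, not the paper's actual template:

```python
def quad_to_paraphrase(category, aspect, opinion, polarity):
    """Linearize a sentiment quad into a target sentence.
    Hypothetical template for illustration only; the paper defines
    its own fixed wording."""
    # Implicit aspect terms are conventionally marked NULL in ABSA data.
    aspect = aspect if aspect != "NULL" else "it"
    return f"{category} is {polarity} because {aspect} is {opinion}"
```

At inference time, the generated sentence is parsed back against the template to recover the four elements.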
Improving Non-autoregressive Generation with Mixup Training
While pre-trained language models have achieved great success on various natural language understanding tasks, how to effectively leverage them into non-autoregressive generation tasks remains a challenge.
Improving the Diversity of Unsupervised Paraphrasing with Embedding Outputs
We present a novel technique for zero-shot paraphrase generation.
Ask me in your own words: paraphrasing for multitask question answering
Multitask learning has led to significant advances in Natural Language Processing, including the decaNLP benchmark where question answering is used to frame 10 natural language understanding tasks in a single model.
Visual Information Guided Zero-Shot Paraphrase Generation
Zero-shot paraphrase generation has drawn much attention, as large-scale high-quality paraphrase corpora are limited.
On the Evaluation Metrics for Paraphrase Generation
In this paper we revisit automatic metrics for paraphrase evaluation and obtain two findings that contradict conventional wisdom: (1) reference-free metrics achieve better performance than their reference-based counterparts.
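The reference-based vs. reference-free distinction can be illustrated with a toy scorer (unigram F1 here is a crude stand-in for real n-gram or learned metrics; `token_f1`, `reference_based`, and `reference_free` are hypothetical helpers):

```python
from collections import Counter

def token_f1(a, b):
    """Unigram F1 overlap between two token lists -- a crude stand-in
    for BLEU- or BERTScore-style scoring."""
    ca, cb = Counter(a), Counter(b)
    overlap = sum((ca & cb).values())  # multiset intersection
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(a), overlap / len(b)
    return 2 * precision * recall / (precision + recall)

def reference_based(candidate, reference):
    """Score the candidate against a gold paraphrase reference."""
    return token_f1(candidate.split(), reference.split())

def reference_free(candidate, source):
    """Score the candidate directly against the input sentence,
    with no gold reference required."""
    return token_f1(candidate.split(), source.split())
```

The structural difference is what gets compared: reference-based metrics need annotated gold paraphrases, while reference-free metrics only need the source sentence, which is why the latter scale more easily.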