Text Style Transfer

80 papers with code • 2 benchmarks • 6 datasets

Text Style Transfer is the task of controlling certain attributes of generated text. State-of-the-art methods fall into two main categories, depending on whether they operate on parallel or non-parallel data. Methods for parallel data are typically supervised, using a neural sequence-to-sequence model with an encoder-decoder architecture. Methods for non-parallel data are usually unsupervised approaches based on Disentanglement, Prototype Editing, or Pseudo-Parallel Corpus Construction.
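
As a rough illustration of the supervised, parallel-data setting, the sketch below fine-tunes a sequence-to-sequence encoder-decoder on style-paired sentences. The checkpoint name, the prompt prefix, the toy sentence pairs, and the hyperparameters are illustrative assumptions, not the setup of any particular paper.

```python
# Minimal sketch: supervised style transfer as seq2seq learning on parallel data.
# Assumes the Hugging Face `transformers` library; checkpoint and examples are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Toy parallel pairs: (informal source, formal target).
pairs = [
    ("gotta go, talk later", "I have to leave now; let us speak later."),
    ("this movie was kinda meh", "The film was rather unremarkable."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()
for src, tgt in pairs:
    inputs = tokenizer("transfer style: " + src, return_tensors="pt")
    labels = tokenizer(tgt, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss  # cross-entropy against the target sentence
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: generate a restyled version of a new sentence.
model.eval()
test = tokenizer("transfer style: that talk was super boring", return_tensors="pt")
print(tokenizer.decode(model.generate(**test, max_new_tokens=32)[0], skip_special_tokens=True))
```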

The most popular benchmark for this task is the Yelp Review Dataset. Models are typically evaluated with Sentiment Accuracy, BLEU, and perplexity (PPL).
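
For concreteness, the snippet below shows one common way these metrics are computed: style accuracy via a pretrained sentiment classifier, BLEU against reference sentences with sacreBLEU, and fluency as GPT-2 perplexity. The specific checkpoints, labels, and example sentences are illustrative choices, not a fixed evaluation protocol.

```python
# Sketch of the usual automatic TST metrics: style (sentiment) accuracy, BLEU, and PPL.
# Checkpoints and example sentences are placeholders.
import math
import torch
import sacrebleu
from transformers import pipeline, GPT2LMHeadModel, GPT2TokenizerFast

outputs = ["the service was wonderful and friendly"]      # system outputs
references = [["the staff was wonderful and friendly"]]   # one reference stream
target_label = "POSITIVE"                                  # intended target style

# 1) Style accuracy: fraction of outputs the classifier assigns the target label.
clf = pipeline("sentiment-analysis")
preds = clf(outputs)
style_acc = sum(p["label"] == target_label for p in preds) / len(outputs)

# 2) BLEU against the references (corpus-level).
bleu = sacrebleu.corpus_bleu(outputs, references).score

# 3) Fluency: perplexity of the outputs under GPT-2.
lm_tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()
with torch.no_grad():
    ids = lm_tok(outputs[0], return_tensors="pt").input_ids
    nll = lm(ids, labels=ids).loss  # mean token-level negative log-likelihood
    ppl = math.exp(nll.item())

print(f"style accuracy={style_acc:.2f}  BLEU={bleu:.1f}  PPL={ppl:.1f}")
```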

Latest papers with no code

LMStyle Benchmark: Evaluating Text Style Transfer for Chatbots

no code yet • 13 Mar 2024

Since the breakthrough of ChatGPT, large language models (LLMs) have garnered significant attention in the research community.

Distilling Text Style Transfer With Self-Explanation From LLMs

no code yet • 2 Mar 2024

Text Style Transfer (TST) seeks to alter the style of text while retaining its core content.

Unsupervised Text Style Transfer via LLMs and Attention Masking with Multi-way Interactions

no code yet • 21 Feb 2024

Among existing methods for UTST tasks, the attention masking approach and Large Language Models (LLMs) are regarded as two pioneering approaches.

Text Detoxification as Style Transfer in English and Hindi

no code yet • 12 Feb 2024

This task contributes to safer and more respectful online communication and can be considered a Text Style Transfer (TST) task, where the text style changes while its content is preserved.

Exploring Methods for Cross-lingual Text Style Transfer: The Case of Text Detoxification

no code yet • 23 Nov 2023

Text detoxification is the task of transferring the style of text from toxic to neutral.

TSST: A Benchmark and Evaluation Models for Text Speech-Style Transfer

no code yet • 14 Nov 2023

In summary, we present the TSST task, a new benchmark for style transfer that emphasizes human-oriented evaluation, exploring and advancing the performance of current LLMs.

Prefix-Tuning Based Unsupervised Text Style Transfer

no code yet • 23 Oct 2023

Unsupervised text style transfer aims at training a generative model that can alter the style of the input sentence while preserving its content without using any parallel data.

Unsupervised Text Style Transfer with Deep Generative Models

no code yet • 31 Aug 2023

We present a general framework for unsupervised text style transfer with deep generative models.

Text Style Transfer Evaluation Using Large Language Models

no code yet • 25 Aug 2023

This suggests that LLMs could be a feasible alternative to human evaluation and other automated metrics in TST evaluation.

Learning Evaluation Models from Large Language Models for Sequence Generation

no code yet • 8 Aug 2023

Large language models achieve state-of-the-art performance on sequence generation evaluation, but typically have a large number of parameters.