Text Style Transfer
80 papers with code • 2 benchmarks • 6 datasets
Text Style Transfer is the task of controlling certain attributes of generated text. State-of-the-art methods fall into two main types, depending on whether they operate on parallel or non-parallel data. Methods for parallel data are typically supervised, using a neural sequence-to-sequence model with an encoder-decoder architecture. Methods for non-parallel data are usually unsupervised approaches based on disentanglement, prototype editing, or pseudo-parallel corpus construction.
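The prototype-editing family of methods (often described as "delete, retrieve, generate") can be sketched in a few lines. The marker lists and phrase table below are toy assumptions for illustration, not from any specific paper; real systems learn the style markers and use a neural model for the generate step.

```python
# Toy sketch of prototype editing for sentiment transfer (negative -> positive).
# NEGATIVE_MARKERS and POSITIVE_PHRASES are hypothetical; real systems learn
# style markers from data and generate replacements with a trained model.
NEGATIVE_MARKERS = {"terrible", "awful", "bland"}
POSITIVE_PHRASES = {"terrible": "fantastic", "awful": "wonderful", "bland": "flavorful"}

def transfer(sentence: str) -> str:
    tokens = sentence.lower().split()
    out = []
    for t in tokens:
        if t in NEGATIVE_MARKERS:
            # delete the source-style marker, retrieve a target-style phrase,
            # and splice it into the content prototype (the "generate" step)
            out.append(POSITIVE_PHRASES[t])
        else:
            out.append(t)  # content word, preserved unchanged
    return " ".join(out)

print(transfer("the soup was bland and the service awful"))
# -> "the soup was flavorful and the service wonderful"
```

The appeal of this family is that content preservation comes almost for free, since only the identified style markers are edited.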
The most popular benchmark for this task is the Yelp Review Dataset. Models are typically evaluated with sentiment accuracy (did the style change?), BLEU (is the content preserved?), and perplexity (PPL, is the output fluent?).
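The three metrics can be sketched in pure Python. This is a simplified illustration: real evaluations use a trained style classifier rather than the keyword stand-in passed in below, multi-gram BLEU (e.g. via sacrebleu or NLTK) rather than this unigram version, and a pretrained language model to supply the token log-probabilities for PPL.

```python
import math
from collections import Counter

def bleu1(hypothesis: str, reference: str) -> float:
    """Toy unigram-precision BLEU with a brevity penalty; real evaluations
    use full n-gram BLEU from a library such as sacrebleu."""
    hyp, ref = hypothesis.split(), reference.split()
    overlap = sum((Counter(hyp) & Counter(ref)).values())
    precision = overlap / max(len(hyp), 1)
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return bp * precision

def sentiment_accuracy(outputs, classify) -> float:
    """Fraction of transferred outputs labeled as the target style;
    `classify` stands in for a trained style classifier."""
    return sum(classify(o) == "positive" for o in outputs) / len(outputs)

def perplexity(token_logprobs) -> float:
    """PPL = exp of the negative mean token log-probability under a language
    model (the log-probs would come from a pretrained LM)."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

print(bleu1("the food was great", "the food was terrible"))  # -> 0.75
print(perplexity([math.log(0.5)] * 4))                       # -> 2.0
```

Note the inherent tension these metrics capture: copying the input verbatim maximizes BLEU but scores zero on sentiment accuracy, which is why all three are reported together.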
Latest papers with no code
LMStyle Benchmark: Evaluating Text Style Transfer for Chatbots
Since the breakthrough of ChatGPT, large language models (LLMs) have garnered significant attention in the research community.
Distilling Text Style Transfer With Self-Explanation From LLMs
Text Style Transfer (TST) seeks to alter the style of text while retaining its core content.
Unsupervised Text Style Transfer via LLMs and Attention Masking with Multi-way Interactions
Among existing methods for unsupervised text style transfer (UTST), the attention-masking approach and Large Language Models (LLMs) are regarded as two pioneering directions.
Text Detoxification as Style Transfer in English and Hindi
This task contributes to safer and more respectful online communication and can be considered a Text Style Transfer (TST) task, where the text style changes while its content is preserved.
Exploring Methods for Cross-lingual Text Style Transfer: The Case of Text Detoxification
Text detoxification is the task of transferring the style of text from toxic to neutral.
TSST: A Benchmark and Evaluation Models for Text Speech-Style Transfer
In summary, we present TSST, a new benchmark for text speech-style transfer that emphasizes human-oriented evaluation, and use it to explore and advance the performance of current LLMs.
Prefix-Tuning Based Unsupervised Text Style Transfer
Unsupervised text style transfer aims at training a generative model that can alter the style of the input sentence while preserving its content without using any parallel data.
Unsupervised Text Style Transfer with Deep Generative Models
We present a general framework for unsupervised text style transfer with deep generative models.
Text Style Transfer Evaluation Using Large Language Models
This suggests that LLMs could be a feasible alternative to human evaluation and other automated metrics in TST evaluation.
Learning Evaluation Models from Large Language Models for Sequence Generation
Large language models achieve state-of-the-art performance on sequence generation evaluation, but typically have a large number of parameters.