Unsupervised Text Style Transfer
21 papers with code • 3 benchmarks • 3 datasets
Latest papers with no code
Unsupervised Text Style Transfer via LLMs and Attention Masking with Multi-way Interactions
Among existing methods for unsupervised text style transfer (UTST), attention masking and Large Language Models (LLMs) are regarded as two pioneering approaches.
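The attention-masking idea can be sketched in a few lines: tokens that a style classifier attends to most are treated as style markers and replaced with mask slots for later infilling. This toy assumes hypothetical attention scores and a hypothetical 0.3 threshold; it is an illustration of the idea, not the method from the paper above.

```python
def mask_style_tokens(tokens, attn, threshold=0.3):
    """Mask tokens whose (hypothetical) style-classifier attention is high."""
    return ["<mask>" if a >= threshold else t for t, a in zip(tokens, attn)]

tokens = "the food was absolutely terrible".split()
attn = [0.05, 0.10, 0.05, 0.35, 0.45]   # hypothetical classifier attention
print(mask_style_tokens(tokens, attn))
# ['the', 'food', 'was', '<mask>', '<mask>']
```

An infilling model (e.g., an LLM) would then rewrite the masked slots in the target style while the unmasked content tokens are kept.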
Prefix-Tuning Based Unsupervised Text Style Transfer
Unsupervised text style transfer aims to train a generative model that can alter the style of an input sentence while preserving its content, without using any parallel data.
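Prefix-tuning keeps the pretrained model frozen and learns only a small set of prefix vectors that are prepended to each attention layer's keys and values. The numpy sketch below is a minimal, hypothetical single-head illustration of that mechanism, not the paper's actual architecture; all sizes and weights are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8            # toy hidden size
prefix_len = 4   # number of trainable prefix vectors

# Frozen "pretrained" projections (never updated during prefix-tuning).
W_k = rng.normal(size=(d, d))
W_v = rng.normal(size=(d, d))

# The only trainable parameters: prefix key/value vectors.
prefix_k = rng.normal(size=(prefix_len, d)) * 0.1
prefix_v = rng.normal(size=(prefix_len, d)) * 0.1

def attention_with_prefix(x, prefix_k, prefix_v):
    """One attention step where learned prefixes extend a frozen layer's
    keys/values, steering the output without touching model weights."""
    k = np.concatenate([prefix_k, x @ W_k], axis=0)
    v = np.concatenate([prefix_v, x @ W_v], axis=0)
    q = x[-1]                       # query from the last token
    scores = k @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()                    # softmax over prefix + real tokens
    return w @ v                    # context vector influenced by the prefix

x = rng.normal(size=(5, d))         # 5 toy token embeddings
out = attention_with_prefix(x, prefix_k, prefix_v)
print(out.shape)                    # (8,)
```

During training, gradients would flow only into `prefix_k`/`prefix_v`, which is why the approach suits unsupervised transfer where full fine-tuning is costly.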
Unsupervised Text Style Transfer with Deep Generative Models
We present a general framework for unsupervised text style transfer with deep generative models.
StyleFlow: Disentangle Latent Representations via Normalizing Flow for Unsupervised Text Style Transfer
Cycle construction improves a model's style transfer ability by rebuilding transferred sentences back into original-style sentences, but it also introduces content loss in unsupervised text style transfer tasks.
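The cycle idea can be made concrete with a toy sketch: transfer a sentence, transfer it back, and measure how much of the original is lost. The one-word style lexicon and the mismatch-rate "loss" here are hypothetical stand-ins for a learned transfer model and its reconstruction objective.

```python
STYLE_MAP = {"good": "bad", "bad": "good"}  # hypothetical style lexicon

def transfer(tokens):
    """Flip style tokens, leaving content tokens untouched."""
    return [STYLE_MAP.get(t, t) for t in tokens]

def cycle_loss(tokens):
    """Token mismatch rate between a sentence and its cycle reconstruction."""
    reconstructed = transfer(transfer(tokens))
    return sum(a != b for a, b in zip(tokens, reconstructed)) / len(tokens)

x = "the food was good".split()
print(transfer(x))     # ['the', 'food', 'was', 'bad']
print(cycle_loss(x))   # 0.0 — a lossless transfer reconstructs exactly
```

A real neural transfer model reconstructs imperfectly, so the cycle objective trades off against content preservation; that residual is the content loss the snippet above refers to.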
Low Resource Style Transfer via Domain Adaptive Meta Learning
Text style transfer (TST) without parallel data has achieved some practical success.
Efficient Reinforcement Learning for Unsupervised Controlled Text Generation
A major challenge in applying RL to such tasks is the sparse reward, which is available only after the full text is generated.
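The sparsity problem is easy to see in a toy REINFORCE setup: the reward (say, a style-classifier score) exists only for the finished text, so every token in the sequence is credited with the same terminal value. The policy and scorer below are hypothetical placeholders.

```python
import random

random.seed(0)

def generate(policy, length=5):
    """Sample a token sequence from a toy stochastic policy."""
    return [policy() for _ in range(length)]

def terminal_reward(tokens, target="pos"):
    """Sparse reward: one score, available only once the full text exists
    (standing in for, e.g., a style-classifier score)."""
    return tokens.count(target) / len(tokens)

def reinforce_returns(tokens, reward):
    """Every token shares the single terminal reward — the coarse credit
    assignment that makes sparse-reward RL for text generation hard."""
    return [reward] * len(tokens)

tokens = generate(lambda: random.choice(["pos", "neg"]))
R = terminal_reward(tokens)
returns = reinforce_returns(tokens, R)
print(returns)
```

Denser, per-step reward shaping (as efficiency-oriented methods pursue) would replace the uniform `returns` list with token-level signals.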
Gradient-guided Unsupervised Text Style Transfer via Contrastive Learning
(2) Style misclassification.
Don't Take It Literally: An Edit-Invariant Sequence Loss for Text Generation
Such a training objective is sub-optimal when the target sequence is not perfect, e.g., when the target sequence is corrupted with noise, or when only weak sequence supervision is available.
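The contrast with a position-wise loss can be sketched with a toy n-gram overlap loss: a sequence that is merely shifted is maximally wrong under positional matching but only mildly penalized by n-gram overlap. This is a simplified illustration of the edit-invariance idea, not the paper's actual loss.

```python
from collections import Counter

def positional_loss(pred, target):
    """Standard token-level loss: any positional mismatch is penalized."""
    return sum(p != t for p, t in zip(pred, target)) / len(target)

def ngram_loss(pred, target, n=2):
    """Toy edit-invariant-style loss: 1 minus n-gram overlap, so shifted
    but otherwise correct output is penalized far less."""
    grams = lambda s: Counter(tuple(s[i:i + n]) for i in range(len(s) - n + 1))
    p, t = grams(pred), grams(target)
    overlap = sum((p & t).values())          # multiset intersection
    return 1 - overlap / max(sum(t.values()), 1)

target = "a b c d e".split()
shifted = "b c d e a".split()   # same content, shifted by one position
print(positional_loss(shifted, target))  # 1.0  — every position wrong
print(ngram_loss(shifted, target))       # 0.25 — most bigrams preserved
```

This robustness to shifts and local edits is what makes such losses attractive when targets are noisy or supervision is weak.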
DAML-ST5: Low Resource Style Transfer via Domain Adaptive Meta Learning
Moreover, we propose a new unsupervised TST model, Style-T5 (ST5), which is built on the sequence-to-sequence pre-trained language model T5 and uses style adversarial training for better content preservation and style transfer.
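The style-adversarial idea, in generic form, is that the encoder is trained to *increase* a style discriminator's loss so the representation carries content but not style. The numpy sketch below shows one gradient-reversal step on a toy linear discriminator; it illustrates the general technique, not ST5's specific training procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4                               # toy representation size

w_disc = rng.normal(size=d)         # linear style discriminator
z = rng.normal(size=d)              # content representation of a sentence
label = 1.0                         # true style of the sentence

def disc_loss_and_grad(z, label, w):
    """Logistic style-classification loss and its gradient w.r.t. z."""
    p = 1 / (1 + np.exp(-(w @ z)))
    loss = -(label * np.log(p) + (1 - label) * np.log(1 - p))
    return loss, (p - label) * w    # d(loss)/dz

loss_before, grad_z = disc_loss_and_grad(z, label, w_disc)

# Adversarial step for the encoder: ascend the discriminator's loss
# (gradient reversal), pushing z toward a style-agnostic representation.
z = z + 0.1 * grad_z
loss_after, _ = disc_loss_and_grad(z, label, w_disc)
print(loss_after > loss_before)     # True: the style signal in z weakened
```

In a full model this adversarial term is balanced against a reconstruction or transfer objective so that content is preserved while style leaks out of the representation.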