Text Generation
1104 papers with code • 161 benchmarks • 129 datasets
Text Generation is the task of generating text with the goal of appearing indistinguishable from human-written text. In the literature, this task is more formally known as "natural language generation".
Text generation can be addressed with Markov processes or deep generative models like LSTMs. Recently, some of the most advanced methods include pre-trained transformer models such as BART and GPT, as well as GAN-based approaches. Text generation systems are evaluated either through human ratings or with automatic evaluation metrics such as METEOR, ROUGE, and BLEU.
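As a toy illustration of the Markov-process approach mentioned above, the sketch below builds a word-level Markov chain from a corpus and samples text from it; the corpus, function names, and chain order are made up for illustration, not taken from any particular paper.

```python
import random
from collections import defaultdict

def build_markov_model(text, order=1):
    """Map each word-tuple of length `order` to the words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        model[state].append(words[i + order])
    return model

def generate(model, length=20, seed=None):
    """Sample a word sequence by repeatedly drawing a successor of the current state."""
    state = seed or random.choice(list(model.keys()))
    output = list(state)
    for _ in range(length):
        successors = model.get(state)
        if not successors:
            break
        output.append(random.choice(successors))
        state = tuple(output[-len(state):])
    return " ".join(output)

# Toy corpus for illustration only.
corpus = "the cat sat on the mat and the cat slept on the sofa"
model = build_markov_model(corpus, order=1)
print(generate(model, length=10))
```

Neural approaches replace this frequency table with a learned language model, but the generate-one-token-at-a-time loop is essentially the same.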
(Image credit: Adversarial Ranking for Language Generation)
Libraries
Use these libraries to find Text Generation models and implementations.

Subtasks
- Dialogue Generation
- Data-to-Text Generation
- Multi-Document Summarization
- Text Style Transfer
- Story Generation
- Paraphrase Generation
- Spelling Correction
- Table-to-Text Generation
- Conditional Text Generation
- Visual Storytelling
- Text Infilling
- Question-Answer-Generation
- Story Completion
- News Generation
- Distractor Generation
- Code Documentation Generation
- Concept-To-Text Generation
- Paper generation
- Sonnet Generation
- Profile Generation
- Fact-based Text Editing
- Rules-of-thumb Generation
- Natural Language Landmark Navigation Instructions Generation
Most implemented papers
Show and Tell: A Neural Image Caption Generator
Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions.
Generating Sequences With Recurrent Neural Networks
This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time.
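The paper's models are LSTMs trained from scratch; as a rough sketch of the same idea, predicting one token at a time and feeding it back in, the loop below samples from a pretrained GPT-2 via Hugging Face Transformers. The model choice, prompt, and sampling scheme are assumptions for illustration, not the paper's setup.

```python
# Rough sketch of one-token-at-a-time (autoregressive) sampling.
# GPT-2 stands in for the paper's LSTM; prompt and length are arbitrary choices.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer.encode("The meaning of life is", return_tensors="pt")

for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits              # (1, seq_len, vocab_size)
    probs = torch.softmax(logits[0, -1, :], dim=-1)   # distribution over next token
    next_token = torch.multinomial(probs, num_samples=1)
    input_ids = torch.cat([input_ids, next_token.unsqueeze(0)], dim=1)

print(tokenizer.decode(input_ids[0]))
```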
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token.
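A simplified sketch of the two noising operations described above, sentence shuffling and text infilling, is given below. Span lengths follow a Poisson distribution as in the paper, but the masking ratio, mask token string, and sentence-splitting heuristic are assumptions for illustration rather than BART's exact pretraining configuration.

```python
import random
import numpy as np

MASK = "<mask>"

def shuffle_sentences(document):
    """Randomly permute the sentence order of a document."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    random.shuffle(sentences)
    return ". ".join(sentences) + "."

def infill_spans(tokens, mask_ratio=0.3, poisson_lambda=3.0):
    """Replace contiguous token spans with a single <mask> token each."""
    tokens = list(tokens)
    to_mask = int(len(tokens) * mask_ratio)
    masked = 0
    while masked < to_mask and len(tokens) > 1:
        span = max(1, int(np.random.poisson(poisson_lambda)))
        start = random.randrange(len(tokens))
        end = min(start + span, len(tokens))
        tokens[start:end] = [MASK]   # the whole span collapses to one mask token
        masked += end - start
    return " ".join(tokens)

doc = "The cat sat on the mat. It was sunny. The dog barked outside."
print(shuffle_sentences(doc))
print(infill_spans(doc.split()))
```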
Learning Transferable Visual Models From Natural Language Supervision
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories.
Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models
We observe that our method consistently outperforms beam search (BS) and previously proposed techniques for diverse decoding from neural sequence models.
SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
As a new way of training generative models, Generative Adversarial Nets (GANs), which use a discriminative model to guide the training of the generative model, have enjoyed considerable success in generating real-valued data.
Language Models are Unsupervised Multitask Learners
Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.
BERTScore: Evaluating Text Generation with BERT
We propose BERTScore, an automatic evaluation metric for text generation.
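A minimal usage sketch follows, assuming the reference bert-score package released with the paper is installed; the candidate and reference sentences are made up.

```python
# pip install bert-score
from bert_score import score

candidates = ["The cat sat on the mat."]
references = ["A cat was sitting on the mat."]

# Returns precision, recall, and F1 tensors, one value per candidate/reference pair.
P, R, F1 = score(candidates, references, lang="en")
print(f"BERTScore F1: {F1.mean().item():.4f}")
```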
Unified Language Model Pre-training for Natural Language Understanding and Generation
This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks.
HuggingFace's Transformers: State-of-the-art Natural Language Processing
Transformer architectures have facilitated building higher-capacity models and pretraining has made it possible to effectively utilize this capacity for a wide variety of tasks.
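For reference, a minimal text-generation sketch with the Transformers pipeline API; the model name, prompt, and generation settings are illustrative choices, not recommendations from the paper.

```python
# pip install transformers
from transformers import pipeline

# "text-generation" loads a causal language model; gpt2 is an illustrative choice.
generator = pipeline("text-generation", model="gpt2")
outputs = generator(
    "Text generation systems can",
    max_length=30,
    do_sample=True,
    num_return_sequences=2,
)
for out in outputs:
    print(out["generated_text"])
```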