Text Generation
1505 papers with code • 21 benchmarks • 115 datasets
Text Generation is the task of producing text that is indistinguishable from human-written text. In the literature, this task is more formally known as natural language generation (NLG).
Text generation can be addressed with Markov processes or deep generative models such as LSTMs. More recently, the strongest methods have been Transformer-based models such as GPT and BART, alongside GAN-based approaches. Text generation systems are evaluated either through human ratings or with automatic metrics such as BLEU, ROUGE, and METEOR.
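As a concrete illustration of the Markov-process approach mentioned above, here is a minimal sketch of a bigram Markov chain generator; the toy corpus and seed word are invented for the example:

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, seed_word, length=20):
    """Sample a word sequence by repeatedly picking a random successor."""
    word = seed_word
    output = [word]
    for _ in range(length - 1):
        successors = model.get(word)
        if not successors:
            break  # dead end: this word never appears mid-corpus
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = build_bigram_model(corpus)
print(generate(model, "the"))
```

Neural approaches replace the successor lookup table with a learned distribution over the next token, but the sample-then-extend loop is the same.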
(Image credit: Adversarial Ranking for Language Generation)
Libraries
Use these libraries to find Text Generation models and implementations.
Subtasks
- Dialogue Generation
- Data-to-Text Generation
- Multi-Document Summarization
- Text Style Transfer
- Story Generation
- Paraphrase Generation
- Spelling Correction
- Table-to-Text Generation
- Headline Generation
- Conditional Text Generation
- Visual Storytelling
- Text Infilling
- Distractor Generation
- Question-Answer-Generation
- News Generation
- Story Completion
- Code Documentation Generation
- Concept-To-Text Generation
- Paper generation
- Hint Generation
- Profile Generation
- Sonnet Generation
- Fact-based Text Editing
- Rules-of-thumb Generation
- Molecular description generation
- Natural Language Landmark Navigation Instructions Generation
Latest papers
Parameter-Efficient Fine-Tuning with Discrete Fourier Transform
Low-rank adaptation (LoRA) has recently gained much interest in fine-tuning foundation models.
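This paper parameterizes weight updates in the Fourier domain; as background, the sketch below shows the standard LoRA update it is positioned against: a frozen linear layer plus a trainable low-rank residual. The rank and scaling values are illustrative defaults, not taken from the paper:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = base(x) + (alpha / r) * x @ A^T @ B^T."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the low-rank factors are trained
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero so the wrapped layer initially matches the base layer
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))  # shape: (2, 768)
```

Only the two small factors receive gradients, which is what makes this family of methods parameter-efficient.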
Enhancing Contextual Understanding in Large Language Models through Contrastive Decoding
Large language models (LLMs) tend to integrate input context inadequately during text generation, over-relying on the prior knowledge encoded in their parameters, which can yield text that is factually inconsistent or unfaithful to the context.
Countering Reward Over-optimization in LLM with Demonstration-Guided Reinforcement Learning
While Reinforcement Learning (RL) has been proven essential for tuning large language models (LLMs), it can lead to reward over-optimization (ROO).
PECC: Problem Extraction and Coding Challenges
Recent advancements in large language models (LLMs) have showcased their exceptional abilities across various tasks, such as code generation, problem-solving and reasoning.
Simulating Task-Oriented Dialogues with State Transition Graphs and Large Language Models
In our experiments, using graph-guided response simulations leads to significant improvements in intent classification, slot filling and response relevance compared to naive single-prompt simulated conversations.
LaDiC: Are Diffusion Models Really Inferior to Autoregressive Counterparts for Image-to-Text Generation?
Diffusion models have exhibited remarkable capabilities in text-to-image generation.
Bridging the Gap between Different Vocabularies for LLM Ensemble
Ensembling different large language models (LLMs) to unleash their complementary potential and harness their individual strengths is highly valuable.
WikiSplit++: Easy Data Refinement for Split and Rephrase
The task of Split and Rephrase, which splits a complex sentence into multiple simple sentences with the same meaning, improves readability and enhances the performance of downstream tasks in natural language processing (NLP).
Continuous Language Model Interpolation for Dynamic and Controllable Text Generation
We empirically show that varying the interpolation weights yields predictable and consistent changes in the model outputs with respect to all of the controlled attributes.
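The snippet does not spell out the interpolation mechanics, but a natural reading is linear interpolation of parameters between fine-tuned checkpoints that share an architecture. A minimal sketch under that assumption (the checkpoint names and attributes are hypothetical):

```python
import torch

def interpolate_state_dicts(state_a, state_b, lam):
    """Linearly interpolate two checkpoints of the same architecture:
    theta = (1 - lam) * theta_a + lam * theta_b."""
    return {
        name: (1.0 - lam) * state_a[name] + lam * state_b[name]
        for name in state_a
    }

# Hypothetical usage: model_a and model_b are the same architecture,
# fine-tuned toward different attributes (e.g., simplicity vs. formality).
# Sweeping lam from 0 to 1 moves the merged model between the two behaviors.
#
# merged = interpolate_state_dicts(
#     model_a.state_dict(), model_b.state_dict(), lam=0.3)
# model.load_state_dict(merged)
```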
Control-DAG: Constrained Decoding for Non-Autoregressive Directed Acyclic T5 using Weighted Finite State Automata
The Directed Acyclic Transformer is a fast non-autoregressive (NAR) model that performs well in Neural Machine Translation.