Text Generation
1492 papers with code • 167 benchmarks • 150 datasets
Text Generation is the task of generating text with the goal of being indistinguishable from human-written text. The task is more formally known as "natural language generation" (NLG) in the literature.
Text generation can be addressed with Markov processes or deep generative models such as LSTMs. More recently, some of the most advanced methods are Transformer-based language models such as BART and GPT; GAN-based approaches have also been explored. Text generation systems are evaluated either through human ratings or with automatic evaluation metrics such as BLEU, ROUGE, and METEOR.
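As a minimal illustration of the classical end of this spectrum, the sketch below builds a simple word-level Markov chain from a toy corpus and samples text from it. The function names and the toy corpus are illustrative, not from any library mentioned above; a real system would use a much larger corpus and a higher-order model.

```python
import random
from collections import defaultdict

def build_markov_chain(text, order=1):
    """Map each tuple of `order` consecutive words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain from `start`, sampling a successor word at each step."""
    rng = random.Random(seed)
    state = start
    out = list(state)
    for _ in range(length):
        successors = chain.get(state)
        if not successors:  # dead end: no observed continuation
            break
        out.append(rng.choice(successors))
        state = tuple(out[-len(state):])
    return " ".join(out)

corpus = "the cat sat on the mat the cat ran on the grass"
chain = build_markov_chain(corpus, order=1)
print(generate(chain, start=("the",), length=8))
```

Every generated transition is one observed in the training text, which is exactly why such models stay locally fluent but lack the long-range coherence that LSTMs and Transformers capture.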
( Image credit: Adversarial Ranking for Language Generation )
Libraries
Use these libraries to find Text Generation models and implementations.
Subtasks
- Dialogue Generation
- Data-to-Text Generation
- Multi-Document Summarization
- Text Style Transfer
- Story Generation
- Paraphrase Generation
- Spelling Correction
- Table-to-Text Generation
- Headline Generation
- Conditional Text Generation
- Visual Storytelling
- Text Infilling
- Distractor Generation
- News Generation
- Question-Answer Generation
- Story Completion
- Code Documentation Generation
- Concept-To-Text Generation
- Paper generation
- Sonnet Generation
- Profile Generation
- Fact-based Text Editing
- Rules-of-thumb Generation
- Molecular description generation
- Natural Language Landmark Navigation Instructions Generation
Latest papers
LLM Attributor: Interactive Visual Attribution for LLM Generation
Our library offers a new way to quickly attribute an LLM's text generation to training data points to inspect model behaviors, enhance its trustworthiness, and compare model-generated text with user-provided text.
Set-Aligning Framework for Auto-Regressive Event Temporal Graph Generation
Recent studies, which employ pre-trained language models to auto-regressively generate linearised graphs for constructing event temporal graphs, have shown promising results.
Developing Safe and Responsible Large Language Models -- A Comprehensive Framework
We introduce Safe and Responsible Large Language Model (SR$_{\text{LLM}}$), a model designed to enhance the safety of language generation using LLMs.
On the True Distribution Approximation of Minimum Bayes-Risk Decoding
Minimum Bayes-risk (MBR) decoding has recently gained renewed attention in text generation.
Humane Speech Synthesis through Zero-Shot Emotion and Disfluency Generation
Contemporary conversational systems often present a significant limitation: their responses lack the emotional depth and disfluent characteristics of human interactions.
The Impact of Prompts on Zero-Shot Detection of AI-Generated Text
In this paper, we introduce an evaluative framework to empirically analyze the impact of prompts on the detection accuracy of AI-generated text.
OmniParser: A Unified Framework for Text Spotting, Key Information Extraction and Table Recognition
Recently, visually-situated text parsing (VsTP) has experienced notable advancements, driven by the increasing demand for automated document understanding and the emergence of Generative Large Language Models (LLMs) capable of processing document-based questions.
LITA: Language Instructed Temporal-Localization Assistant
In addition to leveraging existing video datasets with timestamps, we propose a new task, Reasoning Temporal Localization (RTL), along with the dataset, ActivityNet-RTL, for learning and evaluating this task.
Towards LLM-RecSys Alignment with Textual ID Learning
The results show that the zero-shot performance of the pre-trained foundation model is comparable to, or even better than, some traditional recommendation models trained with supervision, demonstrating the potential of the IDGen paradigm to serve as the foundation model for generative recommendation.
ILLUMINER: Instruction-tuned Large Language Models as Few-shot Intent Classifier and Slot Filler
State-of-the-art intent classification (IC) and slot filling (SF) methods often rely on data-intensive deep learning models, limiting their practicality for industry applications.