Text Generation
1495 papers with code • 21 benchmarks • 150 datasets
Text Generation is the task of generating text with the goal of being indistinguishable from human-written text. This task is more formally known as "natural language generation" in the literature.
Text generation can be addressed with Markov processes or deep generative models like LSTMs. Recently, some of the most advanced methods for text generation include Transformer-based models such as BART and GPT, as well as GAN-based approaches. Text generation systems are evaluated either through human ratings or automatic evaluation metrics like METEOR, ROUGE, and BLEU.
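As a concrete illustration of the simplest approach mentioned above, here is a minimal Markov-chain text generator (a sketch using only the standard library; the tiny corpus and function names are illustrative, not from any particular system):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each tuple of `order` consecutive words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=10, seed=0):
    """Random-walk the chain to produce up to `length` words."""
    rng = random.Random(seed)
    state = rng.choice(list(chain.keys()))
    out = list(state)
    while len(out) < length:
        successors = chain.get(state)
        if not successors:  # reached a state with no observed continuation
            break
        out.append(rng.choice(successors))
        state = tuple(out[-len(state):])
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(build_chain(corpus), length=8))
```

Deep generative models such as LSTMs and Transformers replace the count-based transition table with a learned conditional distribution over the next token, but the sampling loop is conceptually the same.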
(Image credit: Adversarial Ranking for Language Generation)
Libraries
Use these libraries to find Text Generation models and implementations.
Subtasks
- Dialogue Generation
- Data-to-Text Generation
- Multi-Document Summarization
- Text Style Transfer
- Story Generation
- Paraphrase Generation
- Spelling Correction
- Table-to-Text Generation
- Headline Generation
- Conditional Text Generation
- Visual Storytelling
- Text Infilling
- Distractor Generation
- News Generation
- Question-Answer Generation
- Story Completion
- Code Documentation Generation
- Concept-To-Text Generation
- Paper generation
- Sonnet Generation
- Profile Generation
- Fact-based Text Editing
- Rules-of-thumb Generation
- Molecular description generation
- Hint Generation
- Natural Language Landmark Navigation Instructions Generation
Latest papers with no code
Can We Catch the Elephant? The Evolvement of Hallucination Evaluation on Natural Language Generation: A Survey
Hallucination in Natural Language Generation (NLG) is like the elephant in the room, obvious but often overlooked until recent achievements significantly improved the fluency and grammatical accuracy of generated text.
iRAG: An Incremental Retrieval Augmented Generation System for Videos
Use of RAG for combined understanding of multimodal data such as text, images, and videos is appealing, but two critical limitations exist: one-time, upfront capture of all content in large multimodal data as text descriptions entails high processing times, and not all information in the rich multimodal data is typically in the text descriptions.
From $r$ to $Q^*$: Your Language Model is Secretly a Q-Function
Standard RLHF deploys reinforcement learning in a specific token-level MDP, while DPO is derived as a bandit problem in which the whole response of the model is treated as a single arm.
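The bandit framing described above underlies the standard DPO objective, which scores whole responses by their policy-vs-reference log-probability ratios. A minimal sketch of that loss follows; in practice the sequence-level log-probabilities would come from a language model, while here they are stand-in floats:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO treats each full response as a single arm: the loss pushes the
    policy's log-ratio for the preferred response above the rejected one's.
    Loss = -log sigmoid(beta * (ratio_chosen - ratio_rejected))."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When the policy already prefers the chosen response relative to the
# reference, the margin is positive and the loss drops below log(2).
print(dpo_loss(-5.0, -9.0, -6.0, -8.0))
```

The paper's point is that this same objective can be reinterpreted at the token level, with the implicit reward acting as a Q-function over the token-level MDP.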
Prompt-Guided Generation of Structured Chest X-Ray Report Using a Pre-trained LLM
Our method introduces a prompt-guided approach to generate structured chest X-ray reports using a pre-trained large language model (LLM).
Related Work and Citation Text Generation: A Survey
To convince readers of the novelty of their research paper, authors must perform a literature review and compose a coherent story that connects and relates prior works to the current work.
A Survey on Retrieval-Augmented Text Generation for Large Language Models
Retrieval-Augmented Generation (RAG) merges retrieval methods with deep learning advancements to address the static limitations of large language models (LLMs) by enabling the dynamic integration of up-to-date external information.
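The retrieval step described above can be sketched with a toy bag-of-words retriever that selects the most similar documents and prepends them to the prompt (a stdlib-only illustration; real RAG systems use dense embeddings and an actual LLM for the generation step):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Return the top-k documents most similar to the query."""
    q = Counter(query.lower().split())
    return sorted(documents,
                  key=lambda d: cosine(q, Counter(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(query, documents, k=1):
    """Prepend retrieved context to the query before calling a generator."""
    context = "\n".join(retrieve(query, documents, k))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "BLEU compares n-gram overlap with reference translations.",
    "ROUGE measures recall of n-grams against reference summaries.",
]
print(build_prompt("How does BLEU score text?", docs))
```

The "dynamic integration of up-to-date external information" comes from swapping or extending the document store without retraining the model itself.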
Modeling Low-Resource Health Coaching Dialogues via Neuro-Symbolic Goal Summarization and Text-Units-Text Generation
Health coaching helps patients achieve personalized and lifestyle-related goals, effectively managing chronic conditions and alleviating mental health issues.
Generative Text Steganography with Large Language Model
In this paper, we explore a black-box generative text steganographic method based on the user interfaces of large language models, which is called LLM-Stega.
KG-CTG: Citation Generation through Knowledge Graph-guided Large Language Models
Citation Text Generation (CTG) is a task in natural language processing (NLP) that aims to produce text that accurately cites or references a cited document within a source document.
Unveiling LLM Evaluation Focused on Metrics: Challenges and Solutions
The overarching goal is to furnish researchers with a pragmatic guide for effective LLM evaluation and metric selection, thereby advancing the understanding and application of these large language models.