Text Generation
1453 papers with code • 167 benchmarks • 149 datasets
Text Generation is the task of generating text with the goal of appearing indistinguishable from human-written text. This task is more formally known as "natural language generation" (NLG) in the literature.
Text generation can be addressed with Markov processes or deep generative models like LSTMs. Recently, some of the most advanced methods for text generation include Transformer-based models such as BART and GPT, along with GAN-based approaches. Text generation systems are evaluated either through human ratings or automatic evaluation metrics like METEOR, ROUGE, and BLEU.
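As a minimal illustration of the Markov-process approach mentioned above, the sketch below builds a word-level Markov chain from a toy corpus and samples a continuation from it. This is a generic, self-contained example for illustration only; the function names, the `order` parameter, and the toy corpus are all assumptions, not taken from any of the papers or libraries listed on this page.

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each n-gram of words to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, seed, length=20):
    """Extend the seed by repeatedly sampling a random observed continuation."""
    out = list(seed)
    for _ in range(length):
        key = tuple(out[-len(seed):])
        continuations = model.get(key)
        if not continuations:  # dead end: no observed continuation
            break
        out.append(random.choice(continuations))
    return " ".join(out)

# Toy corpus (hypothetical); real systems train on far larger text collections.
corpus = "the cat sat on the mat and the cat ran on the mat"
model = build_model(corpus, order=2)
print(generate(model, ("the", "cat")))
```

Deep generative models such as LSTMs and Transformers replace the lookup table here with a learned conditional distribution over the next token, which lets them generalize to contexts never seen verbatim in training.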
( Image credit: Adversarial Ranking for Language Generation )
Libraries
Use these libraries to find Text Generation models and implementations.
Subtasks
- Dialogue Generation
- Data-to-Text Generation
- Multi-Document Summarization
- Text Style Transfer
- Story Generation
- Paraphrase Generation
- Spelling Correction
- Table-to-Text Generation
- Conditional Text Generation
- Headline Generation
- Visual Storytelling
- Text Infilling
- Distractor Generation
- News Generation
- Question-Answer Generation
- Story Completion
- Code Documentation Generation
- Concept-To-Text Generation
- Paper Generation
- Sonnet Generation
- Profile Generation
- Fact-based Text Editing
- Rules-of-thumb Generation
- Molecular Description Generation
- Natural Language Landmark Navigation Instructions Generation
Latest papers with no code
Improving Attributed Text Generation of Large Language Models via Preference Learning
Large language models have been widely adopted in natural language processing, yet they face the challenge of generating unreliable content.
Scaling Laws For Dense Retrieval
In this study, we investigate whether the performance of dense retrieval models follows the scaling law as other neural models.
SciNews: From Scholarly Complexities to Public Narratives -- A Dataset for Scientific News Report Generation
Scientific news reports serve as a bridge, adeptly translating complex research articles into reports that resonate with the broader public.
Language Models for Text Classification: Is In-Context Learning Enough?
This makes them suitable for addressing text classification problems for domains with limited amounts of annotated instances.
MapGuide: A Simple yet Effective Method to Reconstruct Continuous Language from Brain Activities
In contrast, we propose a simple yet effective method that guides text reconstruction by directly comparing them with the predicted text embeddings mapped from brain activities.
The Solution for the ICCV 2023 1st Scientific Figure Captioning Challenge
In this paper, we propose a solution for improving the quality of captions generated for figures in papers.
Automated Report Generation for Lung Cytological Images Using a CNN Vision Classifier and Multiple-Transformer Text Decoders: Preliminary Study
Independent text decoders for benign and malignant cells are prepared for text generation, and the text decoder switches according to the CNN classification results.
DORE: A Dataset For Portuguese Definition Generation
In this research, we fill this gap by introducing DORE; the first dataset for Definition MOdelling for PoRtuguEse containing more than 100,000 definitions.
Dia-LLaMA: Towards Large Language Model-driven CT Report Generation
Medical report generation has achieved remarkable advancements yet has still been faced with several challenges.
Grammatical vs Spelling Error Correction: An Investigation into the Responsiveness of Transformer-based Language Models using BART and MarianMT
Text continues to remain a relevant form of representation for information.