Text Generation

1505 papers with code • 21 benchmarks • 115 datasets

Text Generation is the task of generating text with the goal of being indistinguishable from human-written text. This task is more formally known as "natural language generation" in the literature.

Text generation can be addressed with Markov processes or deep generative models like LSTMs. More recently, some of the most advanced methods for text generation include Transformer-based language models such as BART and GPT, as well as GAN-based approaches. Text generation systems are evaluated either through human ratings or automatic evaluation metrics like METEOR, ROUGE, and BLEU.
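
As a concrete illustration of the simplest end of this spectrum, the sketch below trains a bigram Markov chain on a toy corpus and scores a generated sentence against a reference with BLEU. The corpus, function names, and use of NLTK's `sentence_bleu` are assumptions for illustration, not a production text-generation system.

```python
import random
from collections import defaultdict

# NLTK is assumed to be installed; it provides a reference BLEU implementation.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction


def train_bigram_chain(corpus):
    """Build a bigram Markov chain: word -> list of observed next words."""
    chain = defaultdict(list)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            chain[prev].append(nxt)
    return chain


def generate(chain, start, max_len=15):
    """Sample a token sequence by following random observed transitions."""
    tokens = [start]
    while len(tokens) < max_len and chain[tokens[-1]]:
        tokens.append(random.choice(chain[tokens[-1]]))
    return tokens


corpus = [
    "the model generates fluent text",
    "the model generates coherent text",
]
chain = train_bigram_chain(corpus)
hypothesis = generate(chain, "the")

# Automatic evaluation: BLEU measures n-gram overlap with a reference text.
reference = "the model generates fluent text".split()
score = sentence_bleu([reference], hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(" ".join(hypothesis), f"(BLEU = {score:.2f})")
```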

(Image credit: Adversarial Ranking for Language Generation)

Libraries

Use these libraries to find Text Generation models and implementations

Parameter-Efficient Fine-Tuning with Discrete Fourier Transform

chaos96/fourierft 5 May 2024

Low-rank adaptation (LoRA) has recently gained much interest in fine-tuning foundation models.
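
For context, the sketch below shows the basic LoRA idea this paper builds on: freezing a pretrained weight matrix and learning a low-rank update BA. It is a simplified, assumed illustration in PyTorch, not the FourierFT method itself, which instead parameterizes the weight update with a small set of learned Fourier-domain coefficients.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = W x + scale * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # The low-rank path adds a rank-`rank` delta to the frozen projection.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale


layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(2, 768))  # only lora_a / lora_b receive gradients
```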

Enhancing Contextual Understanding in Large Language Models through Contrastive Decoding

amazon-science/contextualunderstanding-contrastivedecoding 4 May 2024

Large language models (LLMs) tend to integrate input context inadequately during text generation, relying excessively on the prior knowledge encoded in their parameters, which can result in generated text with factual inconsistencies or contextually unfaithful content.
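
A generic sketch of the contrastive-decoding idea is shown below: at each step, next-token logits computed with the input context are contrasted against logits computed without it, amplifying tokens that the context itself supports. The model choice, weighting, and greedy loop are assumptions for illustration; the paper's exact formulation may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small causal LM chosen purely for illustration.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

context = "The Eiffel Tower was repainted gold in 2023. "
question = "What color is the Eiffel Tower now? Answer:"
alpha = 1.0  # how strongly to amplify the effect of the context

with_ctx = tok(context + question, return_tensors="pt").input_ids
without_ctx = tok(question, return_tensors="pt").input_ids

generated = []
with torch.no_grad():
    for _ in range(5):  # greedy decoding of a few tokens
        logits_ctx = model(with_ctx).logits[:, -1, :]
        logits_plain = model(without_ctx).logits[:, -1, :]
        # Contrast: boost tokens whose score rises when the context is present.
        scores = (1 + alpha) * logits_ctx - alpha * logits_plain
        next_id = scores.argmax(dim=-1, keepdim=True)
        with_ctx = torch.cat([with_ctx, next_id], dim=-1)
        without_ctx = torch.cat([without_ctx, next_id], dim=-1)
        generated.append(next_id.item())

print(tok.decode(generated))
```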

Countering Reward Over-optimization in LLM with Demonstration-Guided Reinforcement Learning

mathieurita/llm_demonstration_guided_rl 30 Apr 2024

While Reinforcement Learning (RL) has proven essential for tuning large language models (LLMs), it can lead to reward over-optimization (ROO).
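
One standard safeguard against reward over-optimization, distinct from the demonstration-guided approach proposed here, is to penalize the policy for drifting away from a frozen reference model with a KL-style term. The sketch below shows that shaped reward; the coefficient and function names are illustrative assumptions.

```python
import torch


def shaped_reward(reward_model_score, policy_logprobs, ref_logprobs, beta=0.1):
    """Reward-model score minus a KL-style penalty toward the reference policy.

    policy_logprobs / ref_logprobs: per-token log-probabilities of the sampled
    response under the trained policy and the frozen reference model.
    """
    kl_per_token = policy_logprobs - ref_logprobs
    return reward_model_score - beta * kl_per_token.sum()


# Toy usage with made-up numbers: a high reward-model score is discounted
# when the policy's behaviour diverges strongly from the reference model.
score = shaped_reward(
    reward_model_score=torch.tensor(2.5),
    policy_logprobs=torch.tensor([-0.2, -0.1, -0.4]),
    ref_logprobs=torch.tensor([-1.0, -0.9, -1.2]),
)
print(score)  # lower than 2.5 because of the KL penalty
```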

PECC: Problem Extraction and Coding Challenges

hallerpatrick/pecc 29 Apr 2024

Recent advancements in large language models (LLMs) have showcased their exceptional abilities across various tasks, such as code generation, problem-solving and reasoning.

Simulating Task-Oriented Dialogues with State Transition Graphs and Large Language Models

algoprog/syntod 23 Apr 2024

In our experiments, using graph-guided response simulations leads to significant improvements in intent classification, slot filling and response relevance compared to naive single-prompt simulated conversations.
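
The sketch below illustrates, in a simplified and hypothetical form, how a state transition graph can guide a simulated task-oriented dialogue: each state names the dialogue act to generate next, and edges constrain which act may follow. The graph, state names, and the `generate_turn` stub are assumptions for illustration, not the SynTOD implementation.

```python
import random

# Hypothetical state transition graph for a food-ordering dialogue.
TRANSITIONS = {
    "greet": ["ask_cuisine"],
    "ask_cuisine": ["recommend_dish"],
    "recommend_dish": ["confirm_order", "ask_cuisine"],
    "confirm_order": ["end"],
}


def generate_turn(state: str) -> str:
    """Stub for an LLM call that would verbalize the dialogue act `state`."""
    return f"<utterance for act '{state}'>"


def simulate_dialogue(start: str = "greet", max_turns: int = 10):
    """Walk the graph, emitting one (act, utterance) pair per state visited."""
    state, turns = start, []
    while state != "end" and len(turns) < max_turns:
        turns.append((state, generate_turn(state)))
        state = random.choice(TRANSITIONS.get(state, ["end"]))
    return turns


for act, utterance in simulate_dialogue():
    print(act, "->", utterance)
```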

LaDiC: Are Diffusion Models Really Inferior to Autoregressive Counterparts for Image-to-Text Generation?

wangyuchi369/ladic 16 Apr 2024

Diffusion models have exhibited remarkable capabilities in text-to-image generation.

Bridging the Gap between Different Vocabularies for LLM Ensemble

xydaytoy/eva 15 Apr 2024

Ensembling different large language models (LLMs) to unleash their complementary potential and harness their individual strengths is highly valuable.

WikiSplit++: Easy Data Refinement for Split and Rephrase

nttcslab-nlp/wikisplit-pp 13 Apr 2024

The task of Split and Rephrase, which splits a complex sentence into multiple simple sentences with the same meaning, improves readability and enhances the performance of downstream tasks in natural language processing (NLP).
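
To make the task concrete, below is a deliberately naive, rule-based split-and-rephrase baseline that splits on a coordinating conjunction; the heuristics are assumptions for illustration and are far weaker than the trained models evaluated with WikiSplit++.

```python
import re


def naive_split_and_rephrase(sentence: str) -> list[str]:
    """Split a complex sentence on ', and' / ', but' into simple sentences.

    A toy heuristic: real split-and-rephrase systems must also resolve
    pronouns and preserve meaning, which this baseline does not attempt.
    """
    parts = re.split(r",\s+(?:and|but)\s+", sentence.rstrip("."))
    return [part.strip().capitalize() + "." for part in parts if part.strip()]


print(naive_split_and_rephrase(
    "The river flooded the valley, and the villagers moved to higher ground."
))
# ['The river flooded the valley.', 'The villagers moved to higher ground.']
```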

Continuous Language Model Interpolation for Dynamic and Controllable Text Generation

faceonlive/ai-research 10 Apr 2024

We empirically show that varying the interpolation weights yields predictable and consistent change in the model outputs with respect to all of the controlled attributes.
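
A minimal sketch of the underlying idea, linear interpolation between the parameters of two fine-tuned models, is shown below. The toy models, the single interpolation weight, and the helper name are assumptions; the paper studies richer interpolation schemes over multiple controlled attributes.

```python
import torch
import torch.nn as nn


def interpolate_state_dicts(model_a, model_b, lam: float):
    """Return parameters (1 - lam) * theta_A + lam * theta_B, key by key."""
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    return {k: (1.0 - lam) * sd_a[k] + lam * sd_b[k] for k in sd_a}


# Toy models standing in for two fine-tuned LMs with identical architecture.
model_a = nn.Linear(16, 16)
model_b = nn.Linear(16, 16)

merged = nn.Linear(16, 16)
merged.load_state_dict(interpolate_state_dicts(model_a, model_b, lam=0.3))
# Sweeping lam from 0 to 1 moves the merged model continuously from A to B.
```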

Control-DAG: Constrained Decoding for Non-Autoregressive Directed Acyclic T5 using Weighted Finite State Automata

faceonlive/ai-research 10 Apr 2024

The Directed Acyclic Transformer is a fast non-autoregressive (NAR) model that performs well in Neural Machine Translation.
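
The sketch below shows the general flavor of automaton-constrained decoding: at each step a deterministic automaton over output tokens determines which vocabulary items are allowed, and the logits of all other tokens are masked before the argmax. The tiny vocabulary, automaton, and random logits are made-up placeholders; Control-DAG itself composes weighted finite-state automata with a non-autoregressive Directed Acyclic Transformer, which this toy greedy loop does not reproduce.

```python
import torch

# Toy vocabulary and a toy DFA that only accepts "book" -> "a" -> "table" -> <eos>.
VOCAB = ["book", "a", "table", "<eos>"]
DFA = {  # state -> {allowed token: next state}
    0: {"book": 1},
    1: {"a": 2},
    2: {"table": 3},
    3: {"<eos>": 3},
}


def constrained_greedy_decode(step_logits: torch.Tensor) -> list[str]:
    """Greedy decoding where tokens not allowed by the DFA get -inf logits."""
    state, output = 0, []
    for logits in step_logits:  # one logits vector per decoding step
        mask = torch.full_like(logits, float("-inf"))
        for token in DFA[state]:
            mask[VOCAB.index(token)] = 0.0
        token = VOCAB[int((logits + mask).argmax())]
        output.append(token)
        state = DFA[state][token]
        if token == "<eos>":
            break
    return output


# Random "model" logits for four steps; the DFA forces a well-formed output.
print(constrained_greedy_decode(torch.randn(4, len(VOCAB))))
```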
