Text Generation

650 papers with code • 13 benchmarks • 67 datasets

Text generation is the task of producing text that is indistinguishable from human-written text. The task is more formally known as "natural language generation" in the literature.

Text generation can be addressed with Markov processes or deep generative models like LSTMs. More recently, the most advanced methods include Transformer-based models such as BART and GPT, as well as GAN-based approaches. Text generation systems are evaluated either through human ratings or with automatic evaluation metrics like METEOR, ROUGE, and BLEU.
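The Markov-process approach mentioned above can be sketched as a simple bigram chain: count which words follow which in a corpus, then sample a successor at each step. This is a minimal illustration, not any specific paper's method; the corpus and function names are made up for the example.

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words observed following it."""
    words = text.split()
    model = defaultdict(list)
    for w1, w2 in zip(words, words[1:]):
        model[w1].append(w2)
    return model

def generate(model, start, max_words=10, seed=None):
    """Walk the chain from `start`, sampling a successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words - 1):
        successors = model.get(out[-1])
        if not successors:  # dead end: no observed successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the rug"
model = build_bigram_model(corpus)
print(generate(model, "the", max_words=8, seed=0))
```

Because repeated successors keep their duplicates in the list, sampling with `rng.choice` is automatically frequency-weighted; neural models such as LSTMs or GPT replace these raw counts with a learned conditional distribution over the next token.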

(Image credit: Adversarial Ranking for Language Generation)

Latest papers without code

Global Context with Discrete Diffusion in Vector Quantised Modelling for Image Generation

no code yet • 3 Dec 2021

We show that with the help of a content-rich discrete visual codebook from VQ-VAE, the discrete diffusion model can also generate high fidelity images with global context, which compensates for the deficiency of the classical autoregressive model along pixel space.

Denoising Image Inpainting +1

LOGEN: Few-shot Logical Knowledge-Conditioned Text Generation with Self-training

no code yet • 2 Dec 2021

Previous works leverage logical forms to facilitate logical knowledge-conditioned text generation.

Text Generation

InfoLM: A New Metric to Evaluate Summarization & Data2Text Generation

no code yet • 2 Dec 2021

In this paper, we introduce InfoLM, a family of untrained string-based metrics that addresses the aforementioned flaws thanks to a pre-trained masked language model.

Language Modelling Text Generation

Multi-modal Dependency Tree for Video Captioning

no code yet • NeurIPS 2021

To this end, we propose a novel video captioning method that generates a sentence by first constructing a multi-modal dependency tree and then traversing the constructed tree, where the syntactic structure and semantic relationship in the sentence are represented by the tree topology.

Dependency Parsing Hierarchical structure +3

A Causal Lens for Controllable Text Generation

no code yet • NeurIPS 2021

Controllable text generation concerns two fundamental tasks of wide application, namely generating text of given attributes (i.e., attribute-conditional generation), and minimally editing existing text to possess desired attributes (i.e., text attribute transfer).

Causal Inference Text Attribute Transfer +1

Improvement in Machine Translation with Generative Adversarial Networks

no code yet • 30 Nov 2021

In this paper, we explore machine translation improvement via a Generative Adversarial Network (GAN) architecture.

Machine Translation Text Generation +1

Context Matters in Semantically Controlled Language Generation for Task-oriented Dialogue Systems

no code yet • 28 Nov 2021

We utilize the pre-trained multi-context ConveRT model for context representation in a model trained from scratch; and leverage the immediate preceding user utterance for context generation in a model adapted from the pre-trained GPT-2.

Task-Oriented Dialogue Systems Text Generation

Octree Transformer: Autoregressive 3D Shape Generation on Hierarchically Structured Sequences

no code yet • 24 Nov 2021

Autoregressive models have proven to be very powerful in NLP text generation tasks and lately have gained popularity for image generation as well.

3D Shape Generation Image Generation +1

Crossing the Format Boundary of Text and Boxes: Towards Unified Vision-Language Modeling

no code yet • 23 Nov 2021

In this paper, we propose UNICORN, a vision-language (VL) model that unifies text generation and bounding box prediction into a single architecture.

Image Captioning Language Modelling +5

Realistic simulation of users for IT systems in cyber ranges

no code yet • 23 Nov 2021

Generating user activity is a key capability both for evaluating security monitoring tools and for improving the credibility of attacker analysis platforms (e.g., honeynets).

Conditional Text Generation