Text Generation

650 papers with code • 13 benchmarks • 67 datasets

Text generation is the task of generating text with the goal of appearing indistinguishable from human-written text. This task is more formally known as "natural language generation" in the literature.

Text generation can be addressed with Markov processes or deep generative models like LSTMs. More recently, some of the strongest methods for text generation are pre-trained Transformer models such as BART and GPT, alongside GAN-based approaches. Text generation systems are evaluated either through human ratings or automatic evaluation metrics like METEOR, ROUGE, and BLEU.
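To make the automatic metrics concrete, here is a minimal sketch of sentence-level BLEU: the geometric mean of clipped n-gram precisions, scaled by a brevity penalty. This is a simplified illustration, not the full corpus-level algorithm implemented in toolkits such as NLTK or SacreBLEU (which also add smoothing).

```python
from collections import Counter
import math

def ngram_counts(tokens, n):
    """Count the n-grams occurring in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def simple_bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped
    n-gram precisions times a brevity penalty (no smoothing)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = ngram_counts(candidate, n)
        ref = ngram_counts(reference, n)
        # clip each candidate n-gram count by its count in the reference
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # without smoothing, any zero precision zeroes the score
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # brevity penalty discourages overly short candidates
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * geo_mean

sent = "the cat sat on the mat".split()
print(simple_bleu(sent, sent))  # 1.0: identical sentences get a perfect score
```

ROUGE and METEOR follow the same pattern of comparing system output against references, but weight recall and synonym/stem matches differently.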


( Image credit: Adversarial Ranking for Language Generation )

Latest papers with code

Translation-equivariant Image Quantizer for Bi-directional Image-Text Generation

lucidrains/vector-quantize-pytorch 1 Dec 2021

Recently, vector-quantized image modeling has demonstrated impressive performance on generation tasks such as text-to-image generation.
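The core operation behind vector-quantized image modeling is snapping each continuous latent vector to its nearest entry in a learned codebook, yielding discrete codes that a sequence model can generate. A minimal numpy sketch (the function name and shapes are illustrative, not this paper's implementation):

```python
import numpy as np

def quantize(latents, codebook):
    """Map each latent vector to its nearest codebook entry (L2 distance),
    as in VQ-VAE-style vector-quantized image modeling."""
    # latents: (N, D), codebook: (K, D)
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    indices = dists.argmin(axis=1)       # discrete codes, one per latent
    return codebook[indices], indices

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))       # 8 codes of dimension 4
# latents near codes 2 and 5 should snap back to those codes
latents = codebook[[2, 5]] + 0.01 * rng.normal(size=(2, 4))
quantized, idx = quantize(latents, codebook)
print(idx)  # → [2 5]
```

The translation-equivariance contribution of this paper concerns how these codes behave under spatial shifts of the image; the lookup itself stays the same.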

Text Generation Text-to-Image Generation +1

Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic

yoadtew/zero-shot-image-to-text 29 Nov 2021

Recent text-to-image matching models apply contrastive learning to large corpora of uncurated pairs of images and sentences.
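The contrastive objective these matching models use pulls matched image-text pairs together and pushes mismatched pairs apart within a batch. A minimal numpy sketch of a symmetric, CLIP-style loss (function name, temperature, and batch setup are illustrative assumptions):

```python
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of (image, text) pairs:
    matched pairs sit on the diagonal of the similarity matrix."""
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (B, B) similarities
    labels = np.arange(len(img))                # i-th image matches i-th text

    def xent(l):
        # cross-entropy of the diagonal entries under a row-wise softmax
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image->text and text->image directions
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(1)
emb = rng.normal(size=(4, 8))
# perfectly aligned pairs should score a much lower loss than random pairings
aligned = clip_style_loss(emb, emb)
random_pairs = clip_style_loss(emb, rng.normal(size=(4, 8)))
```

Training on large uncurated corpora amounts to minimizing this loss over many such batches.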

Contrastive Learning Language Modelling +2

L-Verse: Bidirectional Generation Between Image and Text

tgisaturday/L-Verse 22 Nov 2021

Unlike other models, BiART can distinguish between image (or text) as a conditional reference and a generation target.

Image Captioning Image Reconstruction +4

A Novel Corpus of Discourse Structure in Humans and Computers

sfeucht/annotation_evaluation 10 Nov 2021

We present a novel corpus of 445 human- and computer-generated documents, comprising about 27,000 clauses, annotated for semantic clause types and coherence relations that allow for nuanced comparison of artificial and natural discourse modes.

Text Generation

Explaining Face Presentation Attack Detection Using Natural Language

isicv/padisi_usc_dataset 8 Nov 2021

Due to the limited amount of annotated data in our study, we apply a lightweight LSTM network as our natural language generation model.

Face Presentation Attack Detection Language Modelling +1

Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey

ThanhPham1987/NLP-Overview 1 Nov 2021

Large, pre-trained transformer-based language models such as BERT have drastically changed the Natural Language Processing (NLP) field.

Fine-tuning Text Generation

REBEL: Relation Extraction By End-to-end Language generation

Babelscape/rebel Findings (EMNLP) 2021

Extracting relation triplets from raw text is a crucial task in Information Extraction, enabling multiple applications such as populating or validating knowledge bases, fact-checking, and other downstream tasks.

 Ranked #1 on Relation Extraction on NYT (using extra training data)

Entity Linking Fine-tuning +3

29 Oct 2021

s2s-ft: Fine-Tuning Pretrained Transformer Encoders for Sequence-to-Sequence Learning

microsoft/unilm 26 Oct 2021

Pretrained bidirectional Transformers, such as BERT, have achieved significant improvements in a wide variety of language understanding tasks, but it is not straightforward to apply them directly to natural language generation.

Abstractive Text Summarization Fine-tuning +3

Unifying Multimodal Transformer for Bi-directional Image and Text Generation

researchmm/generate-it 19 Oct 2021

We adopt Transformer as our unified architecture for its strong performance and task-agnostic design.

Text Generation Text-to-Image Generation

Permutation invariant graph-to-sequence model for template-free retrosynthesis and reaction prediction

coleygroup/graph2smiles 19 Oct 2021

Synthesis planning and reaction outcome prediction are two fundamental problems in computer-aided organic chemistry for which a variety of data-driven approaches have emerged.

Data Augmentation Graph-to-Sequence +3
