650 papers with code • 13 benchmarks • 67 datasets
Text generation is the task of producing text that is indistinguishable from human-written text. The task is more formally known as "natural language generation" in the literature.
Text generation can be addressed with Markov processes or deep generative models like LSTMs. More recently, some of the most advanced methods for text generation include Transformer-based models such as BART and GPT, as well as GAN-based approaches. Text generation systems are evaluated either through human ratings or automatic evaluation metrics like METEOR, ROUGE, and BLEU.
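As a minimal illustration of the simplest approach mentioned above, here is a sketch of a word-level Markov-chain text generator. The corpus, function names, and parameters are illustrative assumptions, not from any of the listed papers.

```python
import random
from collections import defaultdict

def build_markov_chain(text, order=1):
    # Map each state (a tuple of `order` consecutive words) to the
    # list of words observed to follow it in the corpus.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=1, length=10, seed=0):
    # Walk the chain: start from a random state, then repeatedly
    # sample one of the observed successors of the current state.
    rng = random.Random(seed)
    state = rng.choice(sorted(chain))
    out = list(state)
    while len(out) < length:
        successors = chain.get(tuple(out[-order:]))
        if not successors:
            break  # dead end: this state was never followed by anything
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_markov_chain(corpus, order=1)
print(generate(chain, order=1, length=8))
```

Every consecutive word pair in the output was seen in the corpus, which is exactly why such models are fluent locally but incoherent globally, motivating the neural approaches above.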
(Image credit: Adversarial Ranking for Language Generation)
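To make the automatic evaluation metrics above concrete, the following is a minimal, self-contained sketch of sentence-level BLEU (modified n-gram precision with a brevity penalty) against a single reference. It is a simplified illustration, not the exact formulation used by any benchmark here.

```python
import math
from collections import Counter

def modified_ngram_precision(candidate, reference, n):
    # Count candidate n-grams, clipping each count by its count
    # in the reference so repetition is not rewarded.
    cand, ref = candidate.split(), reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    clipped = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return clipped / total if total else 0.0

def bleu(candidate, reference, max_n=2):
    # Geometric mean of modified n-gram precisions, scaled by a
    # brevity penalty that punishes candidates shorter than the reference.
    precisions = [modified_ngram_precision(candidate, reference, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    c, r = len(candidate.split()), len(reference.split())
    brevity = 1.0 if c > r else math.exp(1 - r / c)
    return brevity * math.exp(log_avg)

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 2))  # identical strings score 1.0
```

ROUGE and METEOR follow the same spirit but emphasize recall and stemming/synonym matching respectively, which is why pages like this report several metrics side by side.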
- Dialogue Generation
- Data-to-Text Generation
- Multi-Document Summarization
- Text Style Transfer
- Paraphrase Generation
- Story Generation
- Spelling Correction
- Table-to-Text Generation
- Conditional Text Generation
- Visual Storytelling
- Text Infilling
- Distractor Generation
- Story Completion
- News Generation
- Paper Generation
- Concept-To-Text Generation
- Sonnet Generation
- Fact-based Text Editing
- Natural Language Landmark Navigation Instructions Generation
We show that with the help of a content-rich discrete visual codebook from VQ-VAE, the discrete diffusion model can also generate high fidelity images with global context, which compensates for the deficiency of the classical autoregressive model along pixel space.
Previous works leverage logical forms to facilitate logical knowledge-conditioned text generation.
In this paper, we introduce InfoLM, a family of untrained string-based metrics that addresses the aforementioned flaws thanks to a pre-trained masked language model.
To this end, we propose a novel video captioning method that generates a sentence by first constructing a multi-modal dependency tree and then traversing the constructed tree, where the syntactic structure and semantic relationship in the sentence are represented by the tree topology.
Controllable text generation concerns two fundamental tasks of wide application, namely generating text with given attributes (i.e., attribute-conditional generation), and minimally editing existing text to possess desired attributes (i.e., text attribute transfer).
We utilize the pre-trained multi-context ConveRT model for context representation in a model trained from scratch; and leverage the immediate preceding user utterance for context generation in a model adapted from the pre-trained GPT-2.
Generating user activity is a key capability both for evaluating security monitoring tools and for improving the credibility of attacker analysis platforms (e.g., honeynets).