Text generation is the task of producing text that is indistinguishable from human-written text.
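To make the task concrete, here is a minimal sketch of a text generator: a bigram (Markov chain) model trained on a toy corpus. The corpus, seed word, and model choice are illustrative assumptions for this page, not taken from any of the papers listed below.

```python
import random

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Sample a word sequence by repeatedly picking a recorded successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = table.get(out[-1])
        if not successors:  # dead end: no successor was ever observed
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# Toy corpus and seed word (illustrative only).
corpus = "the cat sat on the mat and the cat ran"
table = train_bigrams(corpus)
print(generate(table, "the", 5))
```

Modern systems replace the bigram table with a neural language model, but the generation loop (condition on context, sample the next token) is the same idea.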
(Image credit: Adversarial Ranking for Language Generation)
In text generation evaluation, many practical issues, such as inconsistent experimental settings and metric implementations, are often ignored but lead to unfair evaluation and untenable conclusions.
Audio-Visual Scene-Aware Dialog (AVSD) is the task of generating responses in a dialog about a given video; it was organized as a track of the 8th Dialog System Technology Challenge (DSTC8).
In our experiments, we demonstrate that our models lead to significant improvements in KG-to-text generation, achieving BLEU scores of 17.81 on the AGENDA dataset and 63.10 on the WebNLG dataset (seen categories), outperforming the state of the art by 3.51 and 2.51 points, respectively.
This knowledge graph is then automatically completed utilizing thematic knowledge and used to guide a neural language generation model that fleshes out the rest of the world.
Interactive Fiction games are text-based simulations in which an agent interacts with the world purely through natural language.
To further capture the causal and temporal dependencies between the sentences of a coherent story, we employ multi-task learning, combining the generation objective with a discriminative objective that distinguishes true from fake stories during fine-tuning.
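The multi-task setup above can be sketched as a weighted sum of a language-modeling loss and a binary true-vs-fake classification loss. The loss functions, inputs, and the weight `lam` below are illustrative assumptions, not the paper's actual formulation.

```python
import math

def lm_loss(token_probs):
    """Average negative log-likelihood of the gold tokens (generation objective)."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

def disc_loss(p_true, is_true):
    """Binary cross-entropy for the true-vs-fake story classifier."""
    return -math.log(p_true) if is_true else -math.log(1.0 - p_true)

def multitask_loss(token_probs, p_true, is_true, lam=0.5):
    """Combined objective: generation loss plus weighted discriminative loss."""
    return lm_loss(token_probs) + lam * disc_loss(p_true, is_true)

# Illustrative values: three gold-token probabilities and a classifier score.
print(round(multitask_loss([0.9, 0.8, 0.7], 0.95, True), 4))
```

During fine-tuning, both terms would be backpropagated through a shared encoder, so the discriminative signal shapes the representations used for generation.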
Data-to-text generation models face challenges in ensuring data fidelity by referring to the correct input source.
PatentTransformer is our codename for patent text generation based on Transformer-based models.
Large pretrained language models have changed the way researchers approach discriminative natural language understanding tasks, leading to the dominance of approaches that adapt a pretrained model for arbitrary downstream tasks.