Text generation is the task of producing text that is indistinguishable from human-written text.
(Image credit: Adversarial Ranking for Language Generation)
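As a minimal illustration of the task, the sketch below samples a continuation from a pretrained causal language model. It assumes the Hugging Face `transformers` library and the public `gpt2` checkpoint; any causal LM checkpoint would work the same way.

```python
# Minimal text-generation sketch: sample a continuation from a
# pretrained causal language model. Assumes Hugging Face `transformers`
# and the public `gpt2` checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The task of text generation aims to"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling (rather than greedy decoding) tends to produce text that
# reads more like human writing.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```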
Document grounded generation is the task of using the information provided in a document to improve text generation.
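A common baseline for document grounded generation is simply to condition the model on the document by concatenating it with the query. The sketch below assumes the Hugging Face `transformers` library and the public `facebook/bart-base` checkpoint; the document and query strings are illustrative, the `</s>` separator is one common convention, and an off-the-shelf checkpoint will mostly echo its input until fine-tuned on a grounded-generation dataset. The point is only the conditioning pattern.

```python
# Document-grounded generation sketch: prepend the grounding document to
# the query so the decoder can draw facts from the document instead of
# hallucinating. Assumes Hugging Face `transformers` and `facebook/bart-base`.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

document = "The Eiffel Tower was completed in 1889 for the World's Fair."
query = "When was the Eiffel Tower built?"

# Concatenate document and query; the separator convention varies by model.
inputs = tokenizer(f"{document} </s> {query}", return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```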
Many current artificial general intelligence (AGI) and natural language processing (NLP) architectures do not possess general conversational intelligence; that is, they either do not deal with language at all, or they cannot convey knowledge in a form resembling human language without manual, labor-intensive methods such as template-based customization.
ProphetNet is a pre-training-based natural language generation method that shows strong performance on English text summarization and question generation tasks.
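ProphetNet is available through the Hugging Face `transformers` library; the sketch below uses what is, to my knowledge, the publicly listed CNN/DailyMail summarization fine-tune, though the exact checkpoint name is an assumption and may differ in your setup.

```python
# Summarization sketch with ProphetNet via Hugging Face `transformers`.
# The checkpoint name below is assumed to be the public CNN/DailyMail
# fine-tune; substitute your own checkpoint if it differs.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

ckpt = "microsoft/prophetnet-large-uncased-cnndm"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

article = (
    "A new study finds that city parks measurably reduce summer heat. "
    "Researchers recorded temperatures across dozens of neighborhoods."
)
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)

# ProphetNet's pre-training objective predicts several future tokens at
# once (future n-gram prediction); decoding itself is standard beam search.
summary_ids = model.generate(**inputs, num_beams=4, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```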
In this paper, we introduce SciGen, a new challenge dataset for the task of reasoning-aware data-to-text generation consisting of tables from scientific articles and their corresponding descriptions.
Recent commonsense-reasoning tasks are typically discriminative in nature, where a model answers a multiple-choice question for a certain context.
We propose Future Discriminators for Generation (FUDGE), a flexible and modular method for controlled text generation.
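The core of FUDGE is a decoding rule: the base model's next-token distribution is reweighted by a lightweight discriminator that predicts whether the desired attribute will hold in the eventual completion. The sketch below illustrates that rule; `base_lm` and `discriminator` are hypothetical stand-ins for a causal LM returning next-token logits and a classifier returning a log-probability for the attribute.

```python
# Sketch of the FUDGE decoding rule: reweight the base LM's next-token
# distribution by a "future discriminator" that scores how likely each
# candidate continuation is to satisfy the target attribute.
# `base_lm` and `discriminator` are hypothetical callables.
import torch

def fudge_step(base_lm, discriminator, prefix_ids, top_k=50):
    # Base model's log-probabilities for the next token.
    logits = base_lm(prefix_ids)              # shape: (vocab_size,)
    log_probs = torch.log_softmax(logits, dim=-1)

    # Only rescore the top-k candidates, since running the discriminator
    # over the whole vocabulary would be too expensive.
    top_lp, top_ids = torch.topk(log_probs, top_k)

    # For each candidate token, ask the discriminator: if we append this
    # token, what is log P(attribute | prefix + token)?
    attr_lp = torch.stack([
        discriminator(torch.cat([prefix_ids, tok.view(1)]))
        for tok in top_ids
    ])

    # Bayes-style combination: P(token | prefix, attribute) is
    # proportional to P(token | prefix) * P(attribute | prefix + token).
    combined = torch.softmax(top_lp + attr_lp, dim=-1)
    return top_ids[torch.multinomial(combined, 1)]
```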
Many existing style transfer benchmarks primarily focus on individual high-level semantic changes (e.g., positive to negative), which enable controllability at a high level but do not offer fine-grained control over sentence structure, emphasis, and content.
In the sketch stage, a skeleton is extracted from the original ending by removing the words that conflict with the counterfactual condition.
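The toy function below illustrates the idea of the sketch stage with a simple word-overlap heuristic: words tied to the original condition but absent from the counterfactual one are blanked out of the ending. This heuristic is a simplification for illustration, not the paper's learned deletion model.

```python
# Toy illustration of skeleton extraction for counterfactual rewriting.
# Words specific to the original condition (and absent from the
# counterfactual one) are treated as "conflicting" and blanked out.
# This overlap heuristic is a simplification of the learned approach.
def extract_skeleton(original_ending, original_condition, counterfactual_condition):
    orig_words = set(original_condition.lower().split())
    cf_words = set(counterfactual_condition.lower().split())
    conflicting = orig_words - cf_words  # words specific to the old condition

    skeleton = [
        w if w.lower() not in conflicting else "[BLANK]"
        for w in original_ending.split()
    ]
    return " ".join(skeleton)

print(extract_skeleton(
    "She enjoyed the sunny beach all afternoon.",
    "It was a sunny day.",
    "It was a rainy day.",
))
# -> "She enjoyed the [BLANK] beach all afternoon."
```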
Existing table question answering datasets contain abundant factual questions that primarily evaluate a system's query and schema comprehension, but, constrained by their short-form answers, they fail to include questions that require complex reasoning and integration of information.