Generative Question Answering
8 papers with code • 2 benchmarks • 3 datasets
Most implemented papers
Unified Language Model Pre-training for Natural Language Understanding and Generation
This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks.
CoQA: A Conversational Question Answering Challenge
Humans gather information by engaging in conversations involving a series of interconnected questions and answers.
ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation
Current pre-training works in natural language generation pay little attention to the problem of exposure bias on downstream tasks.
PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation
An extensive set of experiments shows that PALM achieves new state-of-the-art results on a variety of language generation benchmarks: generative question answering (Rank 1 on the official MARCO leaderboard), abstractive summarization on CNN/DailyMail and Gigaword, question generation on SQuAD, and conversational response generation on Cornell Movie Dialogues.
General-Purpose Question-Answering with Macaw
Despite the successes of pretrained language models, there are still few high-quality, general-purpose QA systems that are freely available.
Neural Generative Question Answering
Empirical study shows that the proposed model can effectively handle variations in questions and answers, and generate correct, natural-sounding answers by referring to facts in the knowledge base.
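The core idea, generating an answer grounded in retrieved knowledge-base facts, can be illustrated with a toy sketch. This is a retrieve-then-template stand-in for the paper's neural generator; the knowledge base, relation name, and matching rule are all illustrative assumptions.

```python
# Toy knowledge base of (subject, relation, object) facts. The real model
# attends over KB entries neurally; this lookup is only an illustration.
KB = [
    ("paris", "capital_of", "france"),
    ("berlin", "capital_of", "germany"),
]

def answer(question):
    """Match the question to a KB fact and verbalize it as a fluent sentence."""
    q = question.lower()
    for subj, rel, obj in KB:
        # Crude matching: the subject must appear in the question, and the
        # question must mention the relation's keyword ("capital").
        if subj in q and rel == "capital_of" and "capital" in q:
            return f"{subj.capitalize()} is the capital of {obj.capitalize()}."
    return "I don't know."
```

The point is the separation of concerns: fact retrieval supplies correctness, while the surface realization step supplies naturalness.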
KPQA: A Metric for Generative Question Answering Using Keyphrase Weights
To evaluate our metric, we create high-quality human judgments of correctness on two GenQA datasets.
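The intuition behind a keyphrase-weighted metric can be sketched as a token-level F1 in which important tokens carry more weight than filler words. The weight table and default weight below are illustrative assumptions, not KPQA's actual learned weights.

```python
def keyphrase_weighted_f1(prediction, reference, weights, default=0.1):
    """Token-level F1 where each token's contribution is scaled by an
    importance weight; `weights` maps token -> importance, and tokens
    absent from the table get `default`. A stand-in for KPQA's learned
    keyphrase weights, which are predicted by a model rather than fixed."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    w = lambda t: weights.get(t, default)
    ref_set, pred_set = set(ref), set(pred)
    # Weighted precision: share of the prediction's mass found in the reference.
    precision = (sum(w(t) for t in pred if t in ref_set) /
                 sum(w(t) for t in pred)) if pred else 0.0
    # Weighted recall: share of the reference's mass covered by the prediction.
    recall = (sum(w(t) for t in ref if t in pred_set) /
              sum(w(t) for t in ref)) if ref else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

With `weights={"paris": 1.0}`, a prediction that contains the keyphrase scores far higher than one that only overlaps on stopwords, which is the failure mode of plain n-gram metrics that KPQA targets.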
Retrieval-Augmented Generative Question Answering for Event Argument Extraction
We propose a retrieval-augmented generative QA model (R-GQA) for event argument extraction.
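The retrieval-augmented prompting idea can be sketched as: retrieve the annotated QA pair most similar to the input question and prepend it to the generator's prompt as a demonstration. Token-overlap similarity and the prompt layout below are simple stand-ins for the paper's retriever, and the `demonstrations` schema is an illustrative assumption.

```python
def jaccard(a, b):
    """Token-overlap (Jaccard) similarity between two strings; a crude
    stand-in for a learned dense retriever."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def build_prompt(question, context, demonstrations):
    """Pick the most similar annotated QA pair and prepend it as a
    demonstration. `demonstrations` is a list of
    {"question": ..., "answer": ...} dicts (illustrative schema)."""
    best = max(demonstrations, key=lambda d: jaccard(d["question"], question))
    return (
        f"Example question: {best['question']}\n"
        f"Example answer: {best['answer']}\n"
        f"Context: {context}\n"
        f"Question: {question}\n"
        f"Answer:"
    )
```

The resulting prompt conditions the generative QA model on a worked example, so its output format and argument granularity follow the retrieved demonstration.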