Generative Question Answering
11 papers with code • 2 benchmarks • 6 datasets
Latest papers
Reshaping Free-Text Radiology Notes Into Structured Reports With Generative Transformers
We propose a pipeline that extracts information from free-text radiology reports and maps it to the items of the reference structured-reporting (SR) registry proposed by a national society of interventional and medical radiology, focusing on CT staging of patients with lymphoma.
Verif.ai: Towards an Open-Source Scientific Generative Question-Answering System with Referenced and Verifiable Answers
In this paper, we present the current progress of the project Verif.ai, an open-source scientific generative question-answering system with referenced and verified answers.
Sequence-to-Sequence Spanish Pre-trained Language Models
In recent years, significant advancements in pre-trained language models have driven the creation of numerous non-English language variants, with a particular emphasis on encoder-only and decoder-only architectures.
Retrieval-Augmented Generative Question Answering for Event Argument Extraction
We propose a retrieval-augmented generative QA model (R-GQA) for event argument extraction.
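The retrieval-augmented idea behind R-GQA can be sketched in a few lines: retrieve the stored demonstration most similar to the incoming query, then prepend it to the prompt that a generative QA model would consume. This is a minimal illustrative sketch, not the paper's actual model; the overlap-based `similarity` function and the prompt layout are assumptions standing in for the paper's dense retriever and seq2seq generator.

```python
# Minimal sketch of retrieval-augmented generative QA (illustrative only;
# not the R-GQA architecture). A real system would use a dense retriever
# and a pretrained seq2seq generator instead of token overlap.
from collections import Counter


def similarity(a: str, b: str) -> float:
    """Token-overlap similarity between two strings (stand-in retriever score)."""
    ta, tb = Counter(a.lower().split()), Counter(b.lower().split())
    overlap = sum((ta & tb).values())
    total = max(sum(ta.values()), sum(tb.values()))
    return overlap / total if total else 0.0


def retrieve(query: str, store: list[dict]) -> dict:
    """Return the stored QA demonstration most similar to the query."""
    return max(store, key=lambda d: similarity(query, d["question"]))


def build_prompt(query: str, context: str, store: list[dict]) -> str:
    """Prepend the retrieved demonstration to the generator's input."""
    demo = retrieve(query, store)
    return (
        f"Example question: {demo['question']}\n"
        f"Example answer: {demo['answer']}\n"
        f"Context: {context}\n"
        f"Question: {query}\nAnswer:"
    )
```

The retrieved demonstration conditions the generator on an analogous solved example, which is the core of the retrieval-augmented setup; everything downstream of `build_prompt` would be handled by the generative model itself.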
General-Purpose Question-Answering with Macaw
Despite the successes of pretrained language models, there are still few high-quality, general-purpose QA systems that are freely available.
KPQA: A Metric for Generative Question Answering Using Keyphrase Weights
To evaluate our metric, we create high-quality human judgments of correctness on two GenQA datasets.
PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation
An extensive set of experiments shows that PALM achieves new state-of-the-art results on a variety of language generation benchmarks covering generative question answering (Rank 1 on the official MARCO leaderboard), abstractive summarization on CNN/DailyMail as well as Gigaword, question generation on SQuAD, and conversational response generation on Cornell Movie Dialogues.
ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation
Current pre-training work on natural language generation pays little attention to the problem of exposure bias on downstream tasks.
Unified Language Model Pre-training for Natural Language Understanding and Generation
This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks.
CoQA: A Conversational Question Answering Challenge
Humans gather information by engaging in conversations involving a series of interconnected questions and answers.