Conversational Question Answering

62 papers with code • 1 benchmark • 9 datasets

Conversational question answering (CQA) extends standard question answering to multi-turn dialogue: a model must answer a series of interconnected questions while tracking the dialogue context established in earlier turns.

Most implemented papers

SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering

Microsoft/SDNet 10 Dec 2018

Conversational question answering (CQA) is a novel QA task that requires understanding of dialogue context.

CoQA: A Conversational Question Answering Challenge

stanfordnlp/coqa-baselines TACL 2019

Humans gather information by engaging in conversations involving a series of interconnected questions and answers.

PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable

PaddlePaddle/Research ACL 2020

Pre-trained models have proven effective for a wide range of natural language processing tasks.

Attentive History Selection for Conversational Question Answering

prdwb/attentive_history_selection 26 Aug 2019

First, we propose a positional history answer embedding method that encodes conversation history, together with turn position information, naturally within BERT.
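
The abstract only names the idea, so a minimal sketch may help: each token receives an extra learned embedding marking whether (and how many turns ago) it appeared in a previous answer, and this embedding is added to BERT's input embeddings. This is my own illustration built on Hugging Face transformers, not the authors' implementation; the names `PositionalHistoryAnswerEncoder`, `history_ids`, and `max_history_turns` are assumptions.

```python
# Minimal sketch (not the authors' code) of a positional history answer embedding.
import torch
import torch.nn as nn
from transformers import BertModel

class PositionalHistoryAnswerEncoder(nn.Module):
    def __init__(self, model_name="bert-base-uncased", max_history_turns=12):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # id 0 = token never appeared in a previous answer;
        # id k > 0 = token appeared in the answer given k turns ago.
        self.history_embed = nn.Embedding(max_history_turns + 1, hidden)

    def forward(self, input_ids, attention_mask, history_ids):
        # Standard word-piece embeddings ...
        word_embeds = self.bert.get_input_embeddings()(input_ids)
        # ... plus the per-token history answer embedding; BERT then adds its
        # own position and segment embeddings internally.
        inputs_embeds = word_embeds + self.history_embed(history_ids)
        return self.bert(inputs_embeds=inputs_embeds,
                         attention_mask=attention_mask).last_hidden_state
```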

Open-Domain Question Answering Goes Conversational via Question Rewriting

apple/ml-qrecc NAACL 2021

We introduce a new dataset for Question Rewriting in Conversational Context (QReCC), which contains 14K conversations with 80K question-answer pairs.

EasyTransfer -- A Simple and Scalable Deep Transfer Learning Platform for NLP Applications

alibaba/EasyNLP 18 Nov 2020

Pre-trained Language Models (PLMs) and Transfer Learning (TL) algorithms have been applied successfully to a wide range of Natural Language Processing (NLP) applications, yet building an easy-to-use and scalable TL toolkit for this purpose remains difficult.

Ditch the Gold Standard: Re-evaluating Conversational Question Answering

princeton-nlp/evalconvqa ACL 2022

In this work, we conduct the first large-scale human evaluation of state-of-the-art conversational QA systems, where human evaluators converse with models and judge the correctness of their answers.

APOLLO: An Optimized Training Approach for Long-form Numerical Reasoning

gasolsun36/apollo 14 Dec 2022

For the retriever, we adopt a number-aware negative sampling strategy that makes it more discriminative on key numerical facts.
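
A rough sketch of what number-aware negative sampling might look like (my illustration, not the APOLLO code): prefer negative passages that contain numbers but none of the gold numerical facts, so the retriever has to discriminate on the actual values rather than on the mere presence of numbers. The function name, regex, and sampling policy below are assumptions.

```python
# Illustrative number-aware negative sampling for retriever training.
import random
import re

NUM_RE = re.compile(r"\d[\d,]*(?:\.\d+)?")

def sample_negatives(candidates, gold_facts, k=4, seed=0):
    """candidates: passage strings that do not contain the gold answer.
    gold_facts: numbers appearing in the gold evidence (as strings)."""
    rng = random.Random(seed)
    gold = set(gold_facts)
    # Hard negatives: passages mentioning numbers, but none of the gold ones.
    hard = [c for c in candidates
            if (nums := set(NUM_RE.findall(c))) and not (nums & gold)]
    easy = [c for c in candidates if c not in hard]
    rng.shuffle(hard)
    rng.shuffle(easy)
    return (hard + easy)[:k]
```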

KL-Divergence Guided Temperature Sampling

google-research/google-research 2 Jun 2023

One common approach to mitigating hallucinations is to provide source/grounding documents and train the model to produce predictions that are bound to, and attributable to, the provided source.
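
The title suggests the sampling temperature is modulated per decoding step by how much the source document changes the next-token distribution. The following is a rough sketch of that idea as I read it, not the paper's method: it assumes access to next-token logits computed with and without the source, and the mapping from KL divergence to temperature is purely illustrative.

```python
# Sketch of KL-guided temperature sampling (assumed formulation).
import torch
import torch.nn.functional as F

def kl_guided_sample(logits_with_source, logits_without_source,
                     t_min=0.3, t_max=1.0, scale=1.0):
    p = F.log_softmax(logits_with_source, dim=-1)
    q = F.log_softmax(logits_without_source, dim=-1)
    # KL(p || q) between the grounded and ungrounded distributions.
    kl = torch.sum(p.exp() * (p - q), dim=-1)
    # Higher divergence -> the token is source-driven -> sample more
    # conservatively (lower temperature).
    temperature = t_max - (t_max - t_min) * torch.tanh(scale * kl)
    probs = F.softmax(logits_with_source / temperature.unsqueeze(-1), dim=-1)
    return torch.multinomial(probs, num_samples=1)
```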

PIXIU: A Large Language Model, Instruction Data and Evaluation Benchmark for Finance

chancefocus/pixiu 8 Jun 2023

This paper introduces PIXIU, a comprehensive framework comprising the first financial LLM, based on fine-tuning LLaMA with instruction data; the first instruction dataset (136K samples) to support that fine-tuning; and an evaluation benchmark covering 5 tasks and 9 datasets.