Conversational Question Answering
35 papers with code • 0 benchmarks • 6 datasets
These leaderboards are used to track progress in Conversational Question Answering
Conversational question answering (CQA) is a novel QA task that requires understanding of dialogue context.
First, we propose a positional history answer embedding method to encode conversation history with position information using BERT in a natural way.
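The idea of a positional history answer embedding can be sketched as follows: tokens that came from earlier answers receive an extra id recording how many turns ago they appeared, alongside the usual BERT-style input. This is a minimal illustrative sketch, not the paper's actual implementation; the function and marker names are invented for the example.

```python
def build_inputs(history, question, passage):
    """Build BERT-style input tokens plus history-position ids.

    history:  list of (question, answer) turn pairs, oldest first.
    Returns parallel lists: tokens, and history-position ids where
    0 = not a history answer, k = token from the answer given k turns ago.
    """
    tokens, hist_pos = ["[CLS]"], [0]
    n_turns = len(history)
    for i, (q, a) in enumerate(history):
        turns_ago = n_turns - i
        for tok in q.split():           # history questions carry no marker
            tokens.append(tok)
            hist_pos.append(0)
        for tok in a.split():           # history answers get a positional marker
            tokens.append(tok)
            hist_pos.append(turns_ago)
    tokens += question.split() + ["[SEP]"] + passage.split() + ["[SEP]"]
    hist_pos += [0] * (len(question.split()) + 2 + len(passage.split()))
    return tokens, hist_pos
```

In a full model, the `hist_pos` ids would index an embedding table whose vectors are added to the token embeddings, analogous to segment embeddings in BERT.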
The literature has witnessed the success of applying Pre-trained Language Models (PLMs) and Transfer Learning (TL) algorithms to a wide range of Natural Language Processing (NLP) applications, yet building an easy-to-use and scalable TL toolkit for this purpose remains difficult.
In this work, we conduct the first large-scale human evaluation of state-of-the-art conversational QA systems, where human evaluators converse with models and judge the correctness of their answers.
We present an approach that maps utterances in a conversation to logical forms, which are then executed on a large-scale knowledge base.
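The utterance-to-logical-form pipeline can be illustrated with a toy example: parse an utterance into a (relation, entity) logical form, then execute it against a knowledge base. Real systems learn this mapping from data and run over KBs with millions of facts; the rule, relation names, and tiny in-memory KB below are invented purely for illustration.

```python
# Toy in-memory "knowledge base": (entity, relation) -> value.
KB = {
    ("france", "capital"): "paris",
    ("germany", "capital"): "berlin",
}

def parse(utterance):
    """Map 'what is the capital of X?' to the logical form (relation, entity)."""
    words = utterance.lower().rstrip("?").split()
    if "capital" in words and "of" in words:
        entity = words[words.index("of") + 1]
        return ("capital", entity)
    raise ValueError("unparseable utterance")

def execute(logical_form):
    """Execute a (relation, entity) logical form against the KB."""
    relation, entity = logical_form
    return KB[(entity, relation)]
```

The conversational variant additionally has to resolve context-dependent utterances ("And of Germany?") against earlier turns before execution.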
One of the major challenges to multi-turn conversational search is to model the conversation history to answer the current question.
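A common baseline for the history-modeling problem described above is simply to prepend the most recent k turns to the current question before retrieval or reading. This is a minimal sketch of that baseline under assumed inputs, not any specific paper's method.

```python
def contextualize(history, question, k=2):
    """Concatenate the last k history turns with the current question.

    history:  list of past turn strings (e.g. "question answer"), oldest first.
    question: the current question to be answered.
    Returns a single [SEP]-joined string for a retriever or reader.
    """
    recent = history[-k:] if k > 0 else []
    return " [SEP] ".join(recent + [question])
```

Truncating to the last k turns trades recall of distant context for a shorter, less noisy input; more sophisticated approaches instead learn to select or reweight relevant history turns.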
However, to the best of our knowledge, two important questions for conversational comprehension research have not been well studied: 1) How well can the benchmark dataset reflect models' content understanding?