Question Answering on CoQA
| Rank | Method | In-domain | Out-of-domain | Overall | Paper | Year-Month |
|------|--------|-----------|---------------|---------|-------|------------|
| 1 | BERT Large Augmented (single model) | 82.5 | 77.6 | 81.1 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | 2018-10 |
| 2 | SDNet (ensemble) | 80.7 | 75.9 | 79.3 | SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering | 2018-12 |
| 3 | BERT-base finetune (single model) | 79.8 | 74.1 | 78.1 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | 2018-10 |
| 4 | SDNet (single model) | 78.0 | 73.1 | 76.6 | SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering | 2018-12 |
| 5 | FlowQA (single model) | 76.3 | 71.8 | 75.0 | FlowQA: Grasping Flow in History for Conversational Machine Comprehension | 2018-10 |
| 6 | BiDAF++ (single model) | 69.4 | 63.8 | 67.8 | A Qualitative Comparison of CoQA, SQuAD 2.0 and QuAC | 2018-09 |
| 7 | DrQA + seq2seq with copy attention (single model) | 67.0 | 60.4 | 65.1 | CoQA: A Conversational Question Answering Challenge | 2018-08 |
| 8 | Vanilla DrQA (single model) | 54.5 | 47.9 | 52.6 | CoQA: A Conversational Question Answering Challenge | 2018-08 |