DocVQA: A Dataset for VQA on Document Images

1 Jul 2020 · Minesh Mathew, Dimosthenis Karatzas, C. V. Jawahar

We present a new dataset for Visual Question Answering (VQA) on document images called DocVQA. The dataset consists of 50,000 questions defined on 12,000+ document images. A detailed analysis of the dataset in comparison with similar datasets for VQA and reading comprehension is presented. We report several baseline results by adopting existing VQA and reading comprehension models. Although the existing models perform reasonably well on certain types of questions, there is a large performance gap compared to human performance (94.36% accuracy). The models need to improve specifically on questions where understanding the structure of the document is crucial. The dataset, code and leaderboard are available at docvqa.org


Results from the Paper


Task                             Dataset      Model                                       Metric    Value    Global Rank
Visual Question Answering (VQA)  DocVQA test  Human                                       ANLS      0.9436   #1
Visual Question Answering (VQA)  DocVQA test  BERT_LARGE_SQUAD_DOCVQA_FINETUNED_Baseline  ANLS      0.665    #29
Visual Question Answering (VQA)  DocVQA test  BERT_LARGE_SQUAD_DOCVQA_FINETUNED_Baseline  Accuracy  55.77    #1
Visual Question Answering (VQA)  DocVQA val   BERT LARGE Baseline                         ANLS      0.655    #1
Visual Question Answering (VQA)  DocVQA val   BERT LARGE Baseline                         Accuracy  54.48    #1
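The ANLS values above are Average Normalized Levenshtein Similarity scores: for each question, the predicted answer is compared against every acceptable ground-truth answer using a normalized edit distance, scores below a threshold (tau = 0.5 in the DocVQA paper) are zeroed out, and the best per-question score is averaged over all questions. The following is a minimal Python sketch of that metric; the function names and the lowercase/strip preprocessing are illustrative assumptions, and the official evaluation script at docvqa.org may differ in details.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def anls(predictions: list[str], gold: list[list[str]], tau: float = 0.5) -> float:
    """Average Normalized Levenshtein Similarity over a set of questions.

    predictions: one predicted answer string per question.
    gold: the list of acceptable ground-truth answers for each question.
    tau: threshold below which a normalized distance still earns credit
         (0.5 as in the DocVQA paper).
    """
    total = 0.0
    for pred, answers in zip(predictions, gold):
        best = 0.0
        for ans in answers:
            # Assumed preprocessing: case-insensitive, whitespace-trimmed.
            p, g = pred.strip().lower(), ans.strip().lower()
            denom = max(len(p), len(g))
            nl = levenshtein(p, g) / denom if denom else 0.0
            # Score is 1 - NL when the answer is close enough, else 0.
            s = 1.0 - nl if nl < tau else 0.0
            best = max(best, s)
        total += best
    return total / len(predictions)

For example, anls(["12,000"], [["12,000", "12000"]]) returns 1.0 for an exact match, while a prediction whose normalized edit distance to every ground-truth answer is 0.5 or more scores 0 for that question.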
