DUBLIN -- Document Understanding By Language-Image Network

Visual document understanding is a complex task that involves analyzing both the text and the visual elements in document images. Existing models often rely on manual feature engineering or domain-specific pipelines, which limits their ability to generalize across document types and languages. In this paper, we propose DUBLIN, which is pretrained on web pages using three novel objectives: the Masked Document Text Generation Task, the Bounding Box Task, and the Rendered Question Answering Task. These objectives leverage both the spatial and semantic information in document images. Our model achieves competitive or state-of-the-art results on several benchmarks, including Web-Based Structural Reading Comprehension, Document Visual Question Answering, Key Information Extraction, Diagram Understanding, and Table Question Answering. In particular, we show that DUBLIN is the first pixel-based model to achieve an EM of 77.75 and F1 of 84.25 on the WebSRC dataset. We also show that our model outperforms the current pixel-based SOTA models on the DocVQA, InfographicsVQA, OCR-VQA, and AI2D datasets by 4.6%, 6.5%, 2.6%, and 21%, respectively, and achieves competitive performance on RVL-CDIP document classification. Moreover, we create new baselines for text-based datasets by rendering them as document images, to promote research in this direction.
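The abstract's final point, turning text-only datasets into document images so a pixel-based model can consume them, can be sketched with a few lines of Pillow. DUBLIN's actual rendering pipeline is not specified here, so the canvas size, font, and word-wrapping below are illustrative assumptions, not the paper's method.

```python
from PIL import Image, ImageDraw, ImageFont

def render_text_as_document(text: str, width: int = 512, height: int = 256) -> Image.Image:
    """Render plain text onto a white canvas, producing a synthetic
    document image in the spirit of the paper's rendered-text baselines.
    (Hypothetical helper; layout parameters are arbitrary choices.)"""
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    # Naive greedy word wrap: pack words into lines that fit the canvas width.
    lines, line = [], ""
    for word in text.split():
        candidate = (line + " " + word).strip()
        if draw.textlength(candidate, font=font) <= width - 16:
            line = candidate
        else:
            lines.append(line)
            line = word
    lines.append(line)
    # Draw each wrapped line with a fixed line height.
    for i, wrapped in enumerate(lines):
        draw.text((8, 8 + i * 14), wrapped, fill="black", font=font)
    return img

sample = render_text_as_document(
    "Question: What is shown in the diagram? "
    "Context: The diagram depicts the water cycle."
)
```

A text-QA example rendered this way can then be fed to the same image encoder used for real document pages, which is what makes a single pixel-based model applicable to both modalities.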

Task | Dataset | Model | Metric | Value | Global Rank
---- | ------- | ----- | ------ | ----- | -----------
Visual Question Answering (VQA) | AI2D | DUBLIN | EM | 51.11 | #4
Visual Question Answering (VQA) | DeepForm | DUBLIN | F1 | 62.23 | #1
Visual Question Answering (VQA) | DocVQA test | DUBLIN | ANLS | 0.782 | #23
Visual Question Answering (VQA) | DocVQA test | DUBLIN (variable resolution) | ANLS | 0.803 | #21
Visual Question Answering (VQA) | InfographicVQA | DUBLIN (variable resolution) | ANLS | 42.6 | #16
Visual Question Answering (VQA) | InfographicVQA | DUBLIN | ANLS | 36.82 | #20
Visual Question Answering (VQA) | WebSRC | DUBLIN | EM | 77.75 | #1
