Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding

Visually-situated language is ubiquitous -- sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Perhaps due to this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, and image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images.
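The variable-resolution input representation mentioned above rescales each screenshot, preserving its aspect ratio, so that the largest possible grid of fixed-size patches fits within a patch budget. Below is a minimal sketch of that rescaling logic; the 16-pixel patch size and 2048-patch budget are illustrative assumptions, and the function name is hypothetical, not from the paper.

```python
import math

def variable_resolution_grid(height, width, patch_size=16, max_patches=2048):
    """Find the largest patch grid (rows x cols) that fits max_patches
    while preserving the image's aspect ratio.

    Returns (rows, cols, scaled_height, scaled_width); the image would
    then be resized to (scaled_height, scaled_width) before patching.
    """
    # Ideal uniform scale if patches tiled the scaled image exactly:
    # (scale*h/p) * (scale*w/p) == max_patches.
    scale = math.sqrt(max_patches * (patch_size / height) * (patch_size / width))
    rows = max(1, math.floor(scale * height / patch_size))
    cols = max(1, math.floor(scale * width / patch_size))
    # Flooring can still overshoot in edge cases; shrink until within budget.
    while rows * cols > max_patches:
        if rows >= cols:
            rows -= 1
        else:
            cols -= 1
    return rows, cols, rows * patch_size, cols * patch_size
```

For a 1080x1920 screenshot this yields a 33x60 grid (1980 patches), so a wide page keeps its proportions instead of being squashed to a fixed square resolution.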

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Chart Question Answering | ChartQA | Pix2Struct-large | 1:1 Accuracy | 58.6 | #20 |
| Chart Question Answering | ChartQA | Pix2Struct-base | 1:1 Accuracy | 56.0 | #21 |
| Visual Question Answering (VQA) | DocVQA test | Pix2Struct-large | ANLS | 0.766 | #25 |
| Visual Question Answering (VQA) | DocVQA test | Pix2Struct-base | ANLS | 0.721 | #27 |
| Visual Question Answering (VQA) | InfographicVQA | Pix2Struct-large | ANLS | 40 | #17 |
| Visual Question Answering (VQA) | InfographicVQA | Pix2Struct-base | ANLS | 38.2 | #18 |