MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering

Visual language data such as plots, charts, and infographics are ubiquitous in the human world. However, state-of-the-art vision-language models do not perform well on these data. We propose MatCha (Math reasoning and Chart derendering pretraining) to enhance visual language models' capabilities in jointly modeling charts/plots and language data. Specifically, we propose several pretraining tasks that cover plot deconstruction and numerical reasoning, which are key capabilities in visual language modeling. We perform MatCha pretraining starting from Pix2Struct, a recently proposed image-to-text visual language model. On standard benchmarks such as PlotQA and ChartQA, the MatCha model outperforms state-of-the-art methods by as much as nearly 20%. We also examine how well MatCha pretraining transfers to domains such as screenshots, textbook diagrams, and document figures, and observe overall improvements, verifying the usefulness of MatCha pretraining on broader visual language tasks.
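Fine-tuned MatCha checkpoints are published on the Hugging Face Hub and inherit Pix2Struct's image-to-text interface, in which the question is rendered into the input image as a text header. The following is a minimal inference sketch, assuming the google/matcha-chartqa checkpoint and the transformers Pix2Struct classes; the image path is a placeholder for illustration.

```python
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

# "google/matcha-chartqa" is the ChartQA-fine-tuned checkpoint on the
# Hugging Face Hub; "chart.png" is a hypothetical local chart image.
processor = Pix2StructProcessor.from_pretrained("google/matcha-chartqa")
model = Pix2StructForConditionalGeneration.from_pretrained("google/matcha-chartqa")

image = Image.open("chart.png")
question = "Which year has the highest value?"

# The processor renders the question as a header on top of the chart image;
# the model then decodes the answer autoregressively as plain text.
inputs = processor(images=image, text=question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(outputs[0], skip_special_tokens=True))
```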

Task                             Dataset         Model           Metric        Value   Global Rank
Chart Question Answering         ChartQA         MatCha          1:1 Accuracy  64.2    #20
Visual Question Answering (VQA)  DocVQA test     MatCha          ANLS          0.742   #26
Visual Question Answering (VQA)  InfographicVQA  MatCha          ANLS          37.2    #19
Chart Question Answering         PlotQA          MatCha          1:1 Accuracy  91.5    #1
Visual Question Answering        PlotQA-D1       MatCha          1:1 Accuracy  92.3    #1
Visual Question Answering        PlotQA-D2       MatCha          1:1 Accuracy  90.7    #1
Chart Question Answering         RealCQA         Matcha-chartQA  1:1 Accuracy  0.2597  #4
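For reference, the DocVQA and InfographicVQA rows report ANLS (Average Normalized Levenshtein Similarity) rather than exact-match accuracy. Below is a minimal pure-Python sketch of ANLS as commonly defined for these benchmarks, with a similarity threshold of 0.5; it is illustrative, not the official evaluation script.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


def anls(predictions, gold_answers, tau=0.5):
    """Average Normalized Levenshtein Similarity.

    gold_answers[i] is the list of accepted answers for question i.
    A prediction scores 1 - NL against its best-matching gold answer,
    or 0 if the normalized distance NL reaches the threshold tau.
    """
    scores = []
    for pred, golds in zip(predictions, gold_answers):
        best = 0.0
        for gold in golds:
            p, g = pred.strip().lower(), gold.strip().lower()
            nl = levenshtein(p, g) / max(len(p), len(g), 1)
            best = max(best, 1.0 - nl if nl < tau else 0.0)
        scores.append(best)
    return sum(scores) / len(scores)


# Example: an exact match scores 1.0.
print(anls(["2019"], [["2019", "in 2019"]]))
```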
