DePlot: One-shot visual language reasoning by plot-to-table translation

Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples, and their reasoning capabilities remain limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key component of this method is a modality conversion module, named DePlot, which translates the image of a plot or chart into a linearized table. The output of DePlot can then be used directly to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than 28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement on human-written queries from the task of chart question answering (QA).
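The two-step pipeline above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the linearized-table format (cells separated by "|", rows by newlines) follows the style of the paper's examples, and `parse_linearized_table` and `build_one_shot_prompt` are hypothetical helpers standing in for the actual plot-to-table model output handling and LLM prompting.

```python
# Step 1 produces a linearized table from a chart image (via DePlot);
# step 2 feeds that table, as plain text, into an LLM prompt.

def parse_linearized_table(text: str) -> list[list[str]]:
    """Split a linearized table (cells separated by '|', rows by
    newlines) into rows of cells."""
    return [
        [cell.strip() for cell in row.split("|")]
        for row in text.strip().split("\n")
    ]

def build_one_shot_prompt(table: str, question: str) -> str:
    """Assemble a simple prompt: the translated table plus the query."""
    return (
        "Read the table below and answer the question.\n\n"
        f"{table}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Example linearized output such as DePlot might emit for a bar chart.
linearized = "Year | Revenue\n2020 | 10\n2021 | 14"
rows = parse_linearized_table(linearized)
prompt = build_one_shot_prompt(linearized, "In which year was revenue higher?")
```

Because the chart has been converted to text, any off-the-shelf LLM can consume the prompt without chart-specific finetuning, which is what makes the plug-and-play pairing with different LLMs in the results below possible.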

| Task | Dataset | Model | Metric | Value | Rank |
|---|---|---|---|---|---|
| Chart Question Answering | ChartQA | DePlot+FlanPaLM (Self-Consistency) | 1:1 Accuracy | 70.5 | #10 |
| Chart Question Answering | ChartQA | DePlot+GPT3 (CoT) | 1:1 Accuracy | 36.9 | #24 |
| Chart Question Answering | ChartQA | DePlot+GPT3 (Self-Consistency) | 1:1 Accuracy | 42.3 | #23 |
| Chart Question Answering | ChartQA | DePlot+FlanPaLM (CoT) | 1:1 Accuracy | 67.3 | #13 |
| Chart Question Answering | ChartQA | DePlot+Codex (PoT Self-Consistency) | 1:1 Accuracy | 76.7 | #3 |
| Chart Question Answering | ChartQA | DePlot+FlanPaLM+Codex (PoT Self-Consistency) | 1:1 Accuracy | 79.3 | #2 |
| Factual Inconsistency Detection in Chart Captioning | CHOCOLATE-FT | DePlot + GPT-4 | Kendall's Tau-c | 0.109 | #5 |
| Factual Inconsistency Detection in Chart Captioning | CHOCOLATE-LLM | DePlot + GPT-4 | Kendall's Tau-c | 0.117 | #2 |
| Factual Inconsistency Detection in Chart Captioning | CHOCOLATE-LVLM | DePlot + GPT-4 | Kendall's Tau-c | 0.129 | #3 |
| Chart Question Answering | PlotQA | DePlot+FlanPaLM+Codex (PoT Self-Consistency) | 1:1 Accuracy | 66.6 | #2 |
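Several of the strongest configurations in the table use self-consistency decoding: the LLM samples multiple reasoning chains for the same prompt, and the final answer is chosen by majority vote over the answers those chains produce. A minimal sketch of that vote, with `self_consistency_vote` as a hypothetical helper name:

```python
from collections import Counter

def self_consistency_vote(answers: list[str]) -> str:
    """Majority vote over answers extracted from sampled reasoning
    chains, as in self-consistency decoding."""
    # Normalize so trivially different strings ("2021" vs "2021 ")
    # vote together.
    normalized = [a.strip().lower() for a in answers]
    winner, _count = Counter(normalized).most_common(1)[0]
    return winner

# Five sampled chains ending in an answer; the majority wins.
result = self_consistency_vote(["2021", "2021 ", "2020", "2021", "2020"])
```

The PoT (program-of-thoughts) variants work similarly, except that each sampled chain is a short program whose execution result, rather than a free-text answer, is what gets voted on.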
