Are Language Models Puzzle Prodigies? Algorithmic Puzzles Unveil Serious Challenges in Multimodal Reasoning

6 Mar 2024  ·  Deepanway Ghosal, Vernon Toh Yan Han, Chia Yew Ken, Soujanya Poria ·

This paper introduces the novel task of multimodal puzzle solving, framed within the context of visual question answering. We present AlgoPuzzleVQA, a new dataset designed to challenge and evaluate the capabilities of multimodal language models in solving algorithmic puzzles that require visual understanding, language understanding, and complex algorithmic reasoning. The puzzles span a diverse array of mathematical and algorithmic topics, such as boolean logic, combinatorics, graph theory, optimization, and search, aiming to evaluate the gap between visual data interpretation and algorithmic problem-solving skills. The dataset is generated automatically from human-authored code, and every puzzle has an exact solution that the generating algorithm can compute without tedious human calculation, which ensures that the dataset can be scaled up arbitrarily in both reasoning complexity and size. Our investigation reveals that large multimodal models such as GPT-4V and Gemini exhibit limited performance on these puzzle-solving tasks: for a significant number of puzzles, their accuracy is near random in a multiple-choice question-answering setup. These findings emphasize the challenge of integrating visual, language, and algorithmic knowledge to solve complex reasoning problems.
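The generation pipeline described above (human-authored code that both produces a puzzle instance and computes its exact answer) can be illustrated with a minimal sketch. This is a hypothetical example, not code from the paper: it generates a random grid-maze puzzle and solves it exactly with breadth-first search, so the ground-truth answer requires no human calculation and the instance size can be scaled arbitrarily.

```python
import random
from collections import deque


def generate_maze_puzzle(n=5, seed=0):
    """Generate an n x n maze (True = wall) and its exact answer.

    Hypothetical illustration of algorithmic puzzle generation:
    the same code that builds the instance also computes the
    ground-truth shortest path length via BFS (None if unreachable).
    """
    rng = random.Random(seed)
    grid = [[rng.random() < 0.25 for _ in range(n)] for _ in range(n)]
    grid[0][0] = grid[n - 1][n - 1] = False  # keep start and goal open

    dist = {(0, 0): 0}
    queue = deque([(0, 0)])
    while queue:
        r, c = queue.popleft()
        if (r, c) == (n - 1, n - 1):
            return grid, dist[(r, c)]  # exact shortest-path answer
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and not grid[nr][nc] \
                    and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return grid, None  # goal unreachable in this instance
```

Scaling `n` (or the wall density) raises the reasoning complexity while the answer remains exactly computable, which mirrors the scalability property the abstract claims for AlgoPuzzleVQA.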


Datasets


Introduced in the Paper:

AlgoPuzzleVQA

Used in the Paper:

ScienceQA

Results from the Paper


Task                  Dataset        Model  Metric    Value  Global Rank
Multimodal Reasoning  AlgoPuzzleVQA  GPT-4  Accuracy  30.3   #1

Methods


No methods listed for this paper.