Mathematical Question Answering

7 papers with code • 2 benchmarks • 7 datasets

Building systems that automatically answer mathematical questions.

Most implemented papers

Analysing Mathematical Reasoning Abilities of Neural Models

deepmind/mathematics_dataset ICLR 2019

The structured nature of the mathematics domain, covering arithmetic, algebra, probability and calculus, enables the construction of training and test splits designed to clearly illuminate the capabilities and failure-modes of different architectures, as well as evaluate their ability to compose and relate knowledge and learned processes.
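A minimal sketch of how the pre-generated question/answer text files from this dataset can be read, assuming the released format of alternating question and answer lines; the file path is a placeholder for wherever the data was extracted.

```python
# Minimal reader for the pre-generated mathematics_dataset text files,
# assuming the released format of alternating question/answer lines.
# The path below is a placeholder.
def load_pairs(path):
    with open(path, encoding="utf-8") as f:
        lines = [line.rstrip("\n") for line in f]
    # Questions sit on even lines, answers on the following odd lines.
    return list(zip(lines[0::2], lines[1::2]))

if __name__ == "__main__":
    for q, a in load_pairs("train-easy/algebra__linear_1d.txt")[:3]:
        print(f"Q: {q}\nA: {a}\n")
```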

Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning

lupantech/InterGPS ACL 2021

We further propose a novel geometry solving approach with formal language and symbolic reasoning, called Interpretable Geometry Problem Solver (Inter-GPS).
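A toy illustration of symbolic reasoning over formal geometry literals in the spirit of Inter-GPS, not its actual implementation: known facts are expanded by a simple transitivity rule until the query can be answered. The literal names and facts are invented for illustration.

```python
# Toy symbolic reasoner: expand Equals-facts by transitivity.
facts = {("Equals", "AB", "CD"), ("Equals", "CD", "EF")}

def transitive_closure(facts):
    """Apply Equals-transitivity until no new facts appear."""
    changed = True
    while changed:
        changed = False
        for (_, a, b) in list(facts):
            for (_, c, d) in list(facts):
                if b == c and a != d and ("Equals", a, d) not in facts:
                    facts.add(("Equals", a, d))
                    changed = True
    return facts

print(("Equals", "AB", "EF") in transitive_closure(set(facts)))  # True
```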

IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning

lupantech/iconqa 25 Oct 2021

Also, we develop a strong IconQA baseline Patch-TRM that applies a pyramid cross-modal Transformer with input diagram embeddings pre-trained on the icon dataset.
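An illustrative sketch of a pyramid cross-modal Transformer in the spirit of Patch-TRM, not the authors' code: the diagram is split into patches at two scales, embedded, concatenated with question token embeddings, and fed through a shared Transformer encoder. All layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class PyramidCrossModal(nn.Module):
    def __init__(self, dim=128, vocab=5000):
        super().__init__()
        self.coarse = nn.Conv2d(3, dim, kernel_size=32, stride=32)  # 32x32 patches
        self.fine = nn.Conv2d(3, dim, kernel_size=16, stride=16)    # 16x16 patches
        self.word_emb = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, image, question_ids):
        p1 = self.coarse(image).flatten(2).transpose(1, 2)  # (B, N1, dim)
        p2 = self.fine(image).flatten(2).transpose(1, 2)    # (B, N2, dim)
        q = self.word_emb(question_ids)                     # (B, T, dim)
        tokens = torch.cat([p1, p2, q], dim=1)              # joint sequence
        return self.encoder(tokens).mean(dim=1)             # pooled representation

model = PyramidCrossModal()
out = model(torch.randn(2, 3, 224, 224), torch.randint(0, 5000, (2, 12)))
print(out.shape)  # torch.Size([2, 128])
```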

Plane Geometry Diagram Parsing

mingliangzhang2018/PGDP 19 May 2022

Geometry diagram parsing plays a key role in geometry problem solving, wherein primitive extraction and relation parsing remain challenging due to complex layouts and between-primitive relationships.
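As a point of contrast with the neural parser in this paper, a classical baseline for the primitive-extraction step can be sketched with edge detection plus a probabilistic Hough transform; the thresholds are illustrative and "diagram.png" is a placeholder path.

```python
import cv2
import numpy as np

# Classical line-segment extraction from a geometry diagram:
# Canny edges followed by a probabilistic Hough transform.
image = cv2.imread("diagram.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(image, 50, 150)
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=40, minLineLength=30, maxLineGap=5)
for x1, y1, x2, y2 in (segments.reshape(-1, 4) if segments is not None else []):
    print(f"segment ({x1},{y1}) -> ({x2},{y2})")
```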

Mining Mathematical Documents for Question Answering via Unsupervised Formula Labeling

gipplab/MathQA 12 Nov 2022

In this paper, we aim to bridge the gap by presenting data mining methods and benchmark results that employ Mathematical Entity Linking (MathEL) and Unsupervised Formula Labeling (UFL) for semantic formula search and mathematical question answering (MathQA) on the arXiv preprint repository and on Wikipedia and Wikidata, which are part of the Wikimedia ecosystem of free knowledge.
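A toy sketch of unsupervised formula labeling in the spirit of UFL, not the paper's pipeline: identifiers extracted from a LaTeX formula are matched against a small, hand-made identifier-to-concept lexicon to propose a label. The lexicon entries are assumptions.

```python
import re

# Map single-letter identifiers in a formula to candidate concepts.
LEXICON = {"E": "energy", "m": "mass", "c": "speed of light",
           "F": "force", "a": "acceleration"}

def label_formula(latex):
    identifiers = re.findall(r"[A-Za-z]", latex)
    concepts = [LEXICON[i] for i in identifiers if i in LEXICON]
    return ", ".join(dict.fromkeys(concepts))  # dedupe, keep order

print(label_formula("E = mc^2"))  # energy, mass, speed of light
print(label_formula("F = ma"))    # force, mass, acceleration
```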

Speak Like a Native: Prompting Large Language Models in a Native Style

yangzhch6/alignedcot 22 Nov 2023

Specifically, with AlignedCoT, we observe an average +3.2% improvement for gpt-3.5-turbo compared to carefully handcrafted CoT on multi-step reasoning benchmarks. Furthermore, we use AlignedCoT to rewrite the CoT text style in the training set, which improves the performance of Retrieval Augmented Generation by 3.6%. The source code and dataset are available at https://github.com/yangzhch6/AlignedCoT
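An illustrative sketch of the prompting idea behind AlignedCoT, not the released code: instead of handcrafted chain-of-thought demonstrations, few-shot exemplars are phrased in the style the model itself tends to produce. The exemplar texts below are invented for illustration.

```python
# Handcrafted vs. "native-style" chain-of-thought demonstrations.
HANDCRAFTED = ("Q: Tom has 3 apples and buys 2 more. How many apples?\n"
               "A: 3 apples plus 2 apples equals 5 apples. The answer is 5.")
NATIVE_STYLE = ("Q: Tom has 3 apples and buys 2 more. How many apples?\n"
                "A: Let's think step by step. Tom starts with 3 apples. "
                "He buys 2 more, so 3 + 2 = 5. The answer is 5.")

def build_prompt(demonstration, question):
    return f"{demonstration}\n\nQ: {question}\nA:"

print(build_prompt(NATIVE_STYLE, "A shelf holds 4 books; 3 more are added. How many books?"))
```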