Math Word Problem Solving

80 papers with code • 13 benchmarks • 23 datasets

A math word problem is a mathematical exercise (such as in a textbook, worksheet, or exam) where significant background information on the problem is presented in ordinary language rather than in mathematical notation. As most word problems involve a narrative of some sort, they are sometimes referred to as story problems and may vary in the amount of technical language used.
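For example, the story problem "Maria has 5 apples and gives 2 to a friend; how many apples does she have left?" corresponds to the expression 5 - 2 = 3; solving it requires mapping the narrative onto that arithmetic rather than manipulating notation that is given directly.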

Most implemented papers

LLaMA: Open and Efficient Foundation Language Models

facebookresearch/llama arXiv 2023

We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters.

Llama 2: Open Foundation and Fine-Tuned Chat Models

facebookresearch/llama 18 Jul 2023

In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.

DeBERTa: Decoding-enhanced BERT with Disentangled Attention

microsoft/DeBERTa ICLR 2021

Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.

Analysing Mathematical Reasoning Abilities of Neural Models

deepmind/mathematics_dataset ICLR 2019

The structured nature of the mathematics domain, covering arithmetic, algebra, probability and calculus, enables the construction of training and test splits designed to clearly illuminate the capabilities and failure-modes of different architectures, as well as evaluate their ability to compose and relate knowledge and learned processes.
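As a concrete illustration, the pre-generated release of deepmind/mathematics_dataset ships plain-text module files in which question lines alternate with answer lines; the sketch below reads one such file. The directory and file names are illustrative, and the alternating-line layout is an assumption based on the dataset's documented release format.

```python
# Minimal sketch for reading a module file from the pre-generated
# deepmind/mathematics_dataset release. Assumption: each .txt file
# alternates question lines with answer lines; the path below is
# illustrative, not a guaranteed location.
from pathlib import Path

def read_qa_pairs(path):
    """Yield (question, answer) pairs from one module file."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return zip(lines[0::2], lines[1::2])  # question line, then its answer

# Example usage (hypothetical path inside the extracted dataset):
for question, answer in read_qa_pairs("train-easy/arithmetic__add_or_sub.txt"):
    print(question, "->", answer)
    break  # show a single pair
```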

Mistral 7B

mistralai/mistral-src 10 Oct 2023

We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered for superior performance and efficiency.

Mixtral of Experts

jingyaogong/minimind 8 Jan 2024

In particular, Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and multilingual benchmarks.

Qwen2 Technical Report

qwenlm/qwen2 15 Jul 2024

This report introduces the Qwen2 series, the latest addition to our large language models and large multimodal models.

Measuring Mathematical Problem Solving With the MATH Dataset

hendrycks/math 5 Mar 2021

To facilitate future research and increase accuracy on MATH, we also contribute a large auxiliary pretraining dataset which helps teach models the fundamentals of mathematics.
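A hedged sketch of how a MATH-style evaluation is commonly wired up: the hendrycks/math release stores each problem as a JSON file with "problem" and "solution" fields, and the reference answer sits in the final \boxed{...} of the solution. The solve() callable below is a hypothetical stand-in for any model, and the brace-matching regex deliberately ignores nested braces to stay short.

```python
# Minimal MATH-style accuracy sketch. Assumptions: JSON files with
# "problem" and "solution" fields (as in the hendrycks/math release),
# final answers wrapped in \boxed{...}, and a hypothetical solve()
# callable that returns the model's full solution text.
import json
import re
from pathlib import Path

def last_boxed(text):
    """Return the contents of the last \\boxed{...} span, or None.

    Note: this simple regex does not handle nested braces such as
    \\boxed{\\frac{1}{2}}; a real evaluator needs a brace matcher.
    """
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None

def evaluate(data_dir, solve):
    correct = total = 0
    for path in Path(data_dir).rglob("*.json"):
        example = json.loads(path.read_text(encoding="utf-8"))
        gold = last_boxed(example["solution"])
        pred = last_boxed(solve(example["problem"]))
        correct += int(pred is not None and pred == gold)
        total += 1
    return correct / max(total, 1)
```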

DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models

deepseek-ai/deepseek-math 5 Feb 2024

Mathematical reasoning poses a significant challenge for language models due to its complex and structured nature.

Large Language Models are Zero-Shot Reasoners

kojima-takeshi188/zero_shot_cot 24 May 2022

Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars.
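The paper's key finding is that simply appending "Let's think step by step." elicits multi-step reasoning without any exemplars, followed by a second prompt that extracts a clean final answer. A minimal sketch, with generate() as a hypothetical stand-in for any text-completion call:

```python
# Minimal sketch of the two-stage zero-shot chain-of-thought prompting
# described in kojima-takeshi188/zero_shot_cot. generate() is a
# hypothetical text-completion function, not a specific library API.

def zero_shot_cot(question, generate):
    # Stage 1: trigger step-by-step reasoning with the zero-shot cue.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = generate(reasoning_prompt)
    # Stage 2: feed the reasoning back and extract the final answer,
    # using the paper's answer-extraction prompt for arithmetic tasks.
    extraction_prompt = (
        f"{reasoning_prompt} {reasoning}\n"
        "Therefore, the answer (arabic numerals) is"
    )
    return generate(extraction_prompt).strip()
```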