# Mathematical Reasoning

28 papers with code • 3 benchmarks • 5 datasets

## Most implemented papers

### Analysing Mathematical Reasoning Abilities of Neural Models

The structured nature of the mathematics domain, covering arithmetic, algebra, probability, and calculus, enables the construction of training and test splits designed to clearly illuminate the capabilities and failure modes of different architectures, and to evaluate their ability to compose and relate knowledge and learned processes.
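
The idea of procedurally generated splits can be sketched as follows. This is a minimal illustration, not the paper's actual generation code; the module names and question formats are assumptions.

```python
import random

def gen_add_sub(rng):
    """One arithmetic-style question/answer pair (format is illustrative)."""
    a, b = rng.randint(-100, 100), rng.randint(-100, 100)
    return f"What is {a} + {b}?", str(a + b)

def gen_linear_1d(rng):
    """One linear-algebra-style pair: solve a*x + b = c with integer x."""
    a = rng.choice([n for n in range(-9, 10) if n != 0])
    x = rng.randint(-20, 20)
    b = rng.randint(-50, 50)
    return f"Solve {a}*x + {b} = {a * x + b} for x.", str(x)

def make_split(seed, n=1000):
    """Seeded generation: different seeds yield separate train/test splits,
    so held-out questions probe generalization rather than memorization."""
    rng = random.Random(seed)
    generators = [gen_add_sub, gen_linear_1d]
    return [rng.choice(generators)(rng) for _ in range(n)]

train = make_split(seed=0)
test = make_split(seed=1)
```

Because questions are generated rather than scraped, the split designer controls exactly which skills appear at train versus test time.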

### Compositional Generalization with Tree Stack Memory Units

We study compositional generalization, viz., the problem of zero-shot generalization to novel compositions of concepts in a domain.
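
A toy example of what such a compositional split looks like (the command vocabulary here is invented for illustration and is not the paper's benchmark): every primitive and modifier appears during training, but the test set pairs them in a combination never seen together.

```python
PRIMS = {"jump": "JUMP", "walk": "WALK"}
MODS = {"twice": 2, "thrice": 3}

def interpret(command: str) -> str:
    """Map a command like 'jump twice' to its action sequence."""
    word, *rest = command.split()
    action = PRIMS[word]
    times = MODS[rest[0]] if rest else 1
    return " ".join([action] * times)

train = [("walk", "WALK"), ("walk twice", "WALK WALK"), ("jump", "JUMP")]
test = [("jump twice", "JUMP JUMP")]  # novel composition of familiar pieces
```

A model that has truly learned what "jump" and "twice" mean separately should generalize zero-shot to "jump twice".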

### Measuring Mathematical Problem Solving With the MATH Dataset

To facilitate future research and increase accuracy on MATH, we also contribute a large auxiliary pretraining dataset which helps teach models the fundamentals of mathematics.
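
MATH solutions conventionally mark the final answer with `\boxed{...}`, so evaluation typically extracts that span and compares it to the reference. A minimal sketch of such an extractor (handling nested braces; this is an assumed helper, not the benchmark's official scoring code):

```python
from typing import Optional

def extract_boxed(solution: str) -> Optional[str]:
    """Return the contents of the last \\boxed{...} in a MATH-style
    solution string, balancing nested braces; None if no box is found."""
    idx = solution.rfind("\\boxed{")
    if idx == -1:
        return None
    i = idx + len("\\boxed{")
    start, depth = i, 1
    while i < len(solution) and depth:
        if solution[i] == "{":
            depth += 1
        elif solution[i] == "}":
            depth -= 1
        i += 1
    return solution[start:i - 1] if depth == 0 else None
```

For example, `extract_boxed(r"so the answer is $\boxed{\frac{1}{2}}$")` yields the string `\frac{1}{2}`.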

### Training Verifiers to Solve Math Word Problems

State-of-the-art language models can match human performance on many tasks, but they still struggle to robustly perform multi-step mathematical reasoning.
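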
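
The verifier approach can be sketched as best-of-n reranking: sample several candidate solutions, score each with a verifier, and return the top-scoring one. The toy `sample` and `verifier` below are placeholders; in the paper both roles are played by trained language models.

```python
import random
from typing import Callable, List

def best_of_n(problem: str,
              sample: Callable[[str], str],
              verifier: Callable[[str, str], float],
              n: int = 8) -> str:
    """Sample n candidate solutions and return the verifier's top pick."""
    candidates: List[str] = [sample(problem) for _ in range(n)]
    return max(candidates, key=lambda sol: verifier(problem, sol))

# Toy stand-ins: a "generator" that guesses numbers and a "verifier"
# that scores closeness to a pretend correct answer of 12.
rng = random.Random(0)
guess = lambda problem: str(rng.randint(0, 20))
closeness = lambda problem, sol: -abs(int(sol) - 12)
picked = best_of_n("3 * 4 = ?", guess, closeness, n=16)
```

The key property is that verification can be easier than generation: even a generator that is often wrong yields good answers once a reliable scorer filters its samples.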

### Reasoning with Language Model Prompting: A Survey

Reasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications such as medical diagnosis and negotiation.

### Learning to Prove Theorems via Interacting with Proof Assistants

Proof assistants offer a formalism that resembles human mathematical reasoning, representing theorems in higher-order logic and proofs as high-level tactics.

### IsarStep: a Benchmark for High-level Mathematical Reasoning

In this paper, we present a benchmark for high-level mathematical reasoning and study the reasoning capabilities of neural sequence-to-sequence models.

### DRLE: Decentralized Reinforcement Learning at the Edge for Traffic Light Control in the IoV

To this end, we propose DRLE, a decentralized reinforcement learning scheme at the edge for traffic light control in the Internet of Vehicles (IoV).

### Reverse Operation based Data Augmentation for Solving Math Word Problems

Automatically solving math word problems is a critical task in natural language processing.
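
The core idea of reverse-operation augmentation can be sketched as deriving new problems from a known forward fact by asking for an operand instead of the result. This is an illustrative simplification, not the authors' actual pipeline or templates:

```python
def reverse_augment(a: int, b: int):
    """From the forward fact a + b = c, derive 'reverse' problems that
    ask for one of the operands instead of the result."""
    c = a + b
    return [
        (f"What is {a} plus {b}?", c),        # forward problem
        (f"{c} minus what equals {a}?", b),   # reverse: solve for b
        (f"What plus {b} equals {c}?", a),    # reverse: solve for a
    ]
```

Each seed equation thus yields several training pairs, and the reversed variants force the solver to treat the equation relationally rather than pattern-match the surface form.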

### LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning

While designing inductive bias in neural architectures has been widely studied, we hypothesize that transformer networks are flexible enough to learn inductive bias from suitable generic tasks.
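
A generic pretraining task of this kind can be sketched as a synthetic rewrite exercise: the model sees a rule containing variables plus concrete bindings, and must produce the instantiated string. The task format below is invented for illustration and is not the paper's exact specification:

```python
import random

def deduction_example(rng, symbols="abcde", variables="XY"):
    """A synthetic 'deduction'-style primitive: substitute concrete
    strings for the variables appearing in a random rewrite rule."""
    rule = "".join(rng.choice(symbols + variables) for _ in range(6))
    binding = {v: "".join(rng.choice(symbols) for _ in range(rng.randint(1, 3)))
               for v in variables}
    source = " ".join(f"{v}={s}" for v, s in binding.items())
    target = "".join(binding.get(ch, ch) for ch in rule)
    return f"RULE {rule} GIVEN {source}", target
```

Tasks like this carry no mathematical content themselves, yet pretraining on them can instill the substitution and pattern-matching habits that symbolic reasoning relies on.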