Automated Theorem Proving

66 papers with code • 10 benchmarks • 8 datasets

The goal of Automated Theorem Proving is to automatically generate a proof, given a conjecture (the target theorem) and a knowledge base of known facts, all expressed in a formal language. Automated Theorem Proving is useful in a wide range of applications, including the verification and synthesis of software and hardware systems.
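As a minimal illustration of the task (not tied to any particular system below), a conjecture can be stated in a formal language such as Lean 4 and discharged by an automated decision procedure instead of a hand-written proof:

```lean
-- Conjecture: this instance of commutativity of natural-number addition.
-- `omega`, a decision procedure for linear arithmetic, finds the proof
-- automatically; no proof steps are written by hand.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  omega
```

Real benchmarks replace this toy conjecture with statements drawn from large formal libraries, where the prover must also select which known facts to use.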

Source: Learning to Prove Theorems by Learning to Generate Theorems


Most implemented papers

Holophrasm: a neural Automated Theorem Prover for higher-order logic

dwhalen/holophrasm 8 Aug 2016

I propose a system for Automated Theorem Proving in higher order logic using deep learning and eschewing hand-constructed features.

Proof Artifact Co-training for Theorem Proving with Language Models

jesse-michael-han/lean-step-public ICLR 2022

Labeled data for imitation learning of theorem proving in large libraries of formalized mathematics is scarce as such libraries require years of concentrated effort by human specialists to be built.

Llemma: An Open Language Model For Mathematics

eleutherai/gpt-neox 16 Oct 2023

We present Llemma, a large language model for mathematics.

HOList: An Environment for Machine Learning of Higher-Order Theorem Proving

tensorflow/deepmath 5 Apr 2019

We present an environment, benchmark, and deep learning driven automated theorem prover for higher-order logic.

MiniF2F: a cross-system benchmark for formal Olympiad-level mathematics

openai/minif2f ICLR 2022

We present miniF2F, a dataset of formal Olympiad-level mathematics problem statements intended to provide a unified cross-system benchmark for neural theorem proving.

Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs

facebookresearch/minif2f 21 Oct 2022

In this work, we introduce Draft, Sketch, and Prove (DSP), a method that maps informal proofs to formal proof sketches, and uses the sketches to guide an automated prover by directing its search to easier sub-problems.
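In the DSP style, a formal proof sketch mirrors the structure of the informal argument, leaving each step as an easier sub-goal for an automated prover to close. A schematic Lean 4 sketch of this idea (illustrative only, assuming Mathlib's `ring` tactic; not DSP's actual output):

```lean
-- Informal proof: if n = 2k, then n² = 4k² = 2·(2k²), so n² is even.
theorem sq_even_of_even (n : Nat) (h : ∃ k, n = 2 * k) :
    ∃ m, n ^ 2 = 2 * m := by
  obtain ⟨k, hk⟩ := h
  -- Each intermediate step is a small sub-goal closed by automation.
  have h1 : n ^ 2 = 4 * k ^ 2 := by subst hk; ring
  exact ⟨2 * k ^ 2, by rw [h1]; ring⟩
```

The sketch fixes the high-level decomposition; the search effort of the automated prover is spent only on the small gaps.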

LeanDojo: Theorem Proving with Retrieval-Augmented Language Models

lean-dojo/leandojo NeurIPS 2023

Using this data, we develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented with retrieval for selecting premises from a vast math library.

DeepMath - Deep Sequence Models for Premise Selection

JUrban/deepmath NeurIPS 2016

We study the effectiveness of neural sequence models for premise selection in automated theorem proving, one of the main bottlenecks in the formalization of mathematics.
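Premise selection can be framed as a ranking problem: score every known fact against the conjecture and keep the top candidates for the prover. A minimal sketch using bag-of-symbols cosine similarity as a stand-in for the paper's learned sequence encoders (all names and the toy premise library are illustrative):

```python
from collections import Counter
import math

def embed(statement: str) -> Counter:
    """Bag-of-symbols embedding; a neural encoder would replace this."""
    return Counter(statement.split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_premises(conjecture: str, premises: list[str], k: int = 2) -> list[str]:
    """Rank premises by similarity to the conjecture and keep the top-k."""
    goal = embed(conjecture)
    return sorted(premises, key=lambda p: cosine(goal, embed(p)), reverse=True)[:k]

premises = [
    "add_comm : a + b = b + a",
    "mul_assoc : ( a * b ) * c = a * ( b * c )",
    "add_assoc : ( a + b ) + c = a + ( b + c )",
]
print(select_premises("goal : ( x + y ) + z = x + ( y + z )", premises, k=1))
```

A learned selector replaces the crude token overlap with embeddings that capture semantic relevance, but the rank-then-truncate pipeline is the same.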

Learning to Prove Theorems by Learning to Generate Theorems

princeton-vl/MetaGen NeurIPS 2020

We consider automated theorem proving, a key AI task.

Measuring Systematic Generalization in Neural Proof Generation with Transformers

NicolasAG/SGinPG NeurIPS 2020

We observe that models that are not trained to generate proofs are better at generalizing to problems based on longer proofs.