Automated Theorem Proving
57 papers with code • 10 benchmarks • 8 datasets
The goal of Automated Theorem Proving is to automatically generate a proof, given a conjecture (the target theorem) and a knowledge base of known facts, all expressed in a formal language. Automated Theorem Proving is useful in a wide range of applications, including the verification and synthesis of software and hardware systems.
Source: Learning to Prove Theorems by Learning to Generate Theorems
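The task setup above — derive the conjecture from a knowledge base of known facts, all in a formal language — can be sketched with a minimal forward-chaining prover over Horn-style rules. This is an illustrative toy, not any of the systems listed below; the facts and rule format are invented for the example.

```python
def forward_chain(facts, rules, conjecture):
    """Repeatedly apply rules (premises -> conclusion) until the
    conjecture is derived or no new facts can be added."""
    known = set(facts)
    changed = True
    while changed and conjecture not in known:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return conjecture in known

# Toy knowledge base: 0 is a natural number, and each rule derives a successor.
facts = {"nat(0)"}
rules = [({"nat(0)"}, "nat(1)"),
         ({"nat(1)"}, "nat(2)")]
print(forward_chain(facts, rules, "nat(2)"))  # True
```

Real provers replace this exhaustive saturation with guided search — which is exactly where the learned components in the papers below come in.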
Most implemented papers
Holophrasm: a neural Automated Theorem Prover for higher-order logic
I propose a system for Automated Theorem Proving in higher-order logic using deep learning and eschewing hand-constructed features.

HOList: An Environment for Machine Learning of Higher-Order Theorem Proving
We present an environment, benchmark, and deep learning driven automated theorem prover for higher-order logic.
Proof Artifact Co-training for Theorem Proving with Language Models
Labeled data for imitation learning of theorem proving in large libraries of formalized mathematics is scarce, as such libraries require years of concentrated effort by human specialists to build.
DeepMath - Deep Sequence Models for Premise Selection
We study the effectiveness of neural sequence models for premise selection in automated theorem proving, one of the main bottlenecks in the formalization of mathematics.
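Premise selection, the bottleneck this paper targets, means ranking the statements in a knowledge base by how useful they are for proving a given conjecture. DeepMath learns neural embeddings for this; the sketch below stands in a simple bag-of-words cosine similarity instead, and the statements are invented for illustration.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two whitespace-tokenized strings."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_premises(conjecture, knowledge_base, k=2):
    """Return the k knowledge-base statements most similar to the conjecture."""
    return sorted(knowledge_base,
                  key=lambda s: cosine(conjecture, s),
                  reverse=True)[:k]

kb = ["add_comm : a + b = b + a",
      "mul_comm : a * b = b * a",
      "length_append : len(xs ++ ys) = len(xs) + len(ys)"]
print(select_premises("goal : x + y = y + x", kb, k=1))
```

Swapping the scoring function for a learned sequence model is the paper's contribution; the ranking loop stays the same shape.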
Measuring Systematic Generalization in Neural Proof Generation with Transformers
We observe that models that are not trained to generate proofs are better at generalizing to problems based on longer proofs.
Learning Maximally Monotone Operators for Image Recovery
Recently, several works have proposed replacing the operator associated with the regularization with a more sophisticated denoiser.
Learning to Match Mathematical Statements with Proofs
The task is designed to improve the processing of research-level mathematical texts.
MiniF2F: a cross-system benchmark for formal Olympiad-level mathematics
We present miniF2F, a dataset of formal Olympiad-level mathematics problem statements intended to provide a unified cross-system benchmark for neural theorem proving.
Learning Symbolic Rules for Reasoning in Quasi-Natural Language
In this work, we ask how we can build a rule-based system that can reason with natural language input but without the manual construction of rules.
Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs
In this work, we introduce Draft, Sketch, and Prove (DSP), a method that maps informal proofs to formal proof sketches, and uses the sketches to guide an automated prover by directing its search to easier sub-problems.
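The three-stage pipeline described in the abstract can be sketched as plain function composition. All three stage functions below are hypothetical stubs standing in for the paper's actual components (a language model for drafting and sketching, an automated prover for the sub-goals); the step strings and goal names are invented.

```python
def draft(statement):
    # Stage 1 (hypothetical stub): a language model writes an
    # informal, natural-language proof of the statement.
    return ["rewrite using commutativity", "close by reflexivity"]

def sketch(informal_proof):
    # Stage 2 (hypothetical stub): map the informal proof to a formal
    # proof sketch whose unresolved sub-goals are left as "holes".
    return [{"step": s, "hole": f"goal_{i}"}
            for i, s in enumerate(informal_proof)]

def prove(hole):
    # Stage 3 (hypothetical stub): an automated prover searches for a
    # proof of each easier sub-problem the sketch exposes.
    return f"proof_of_{hole}"

def dsp(statement):
    """Compose the stages: draft -> sketch -> prove each hole."""
    return [prove(part["hole"]) for part in sketch(draft(statement))]

print(dsp("a + b = b + a"))  # ['proof_of_goal_0', 'proof_of_goal_1']
```

The point of the decomposition is that the prover in stage 3 only ever faces the easier sub-problems the sketch carves out, rather than the whole theorem at once.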