Automated Theorem Proving

70 papers with code • 10 benchmarks • 8 datasets

The goal of Automated Theorem Proving is to automatically generate a proof, given a conjecture (the target theorem) and a knowledge base of known facts, all expressed in a formal language. It is useful in a wide range of applications, including the verification and synthesis of software and hardware systems.
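The setting above (conjecture + knowledge base → proof) can be sketched with a toy forward-chaining prover for Horn-style rules. This is a minimal illustration of the problem statement, not any system from the papers below; the fact and rule names are invented for the example.

```python
def forward_chain(facts, rules, conjecture):
    """Derive new facts until the conjecture is proved or nothing changes.

    facts: set of atoms known to be true.
    rules: list of (premises, conclusion) pairs, meaning premises → conclusion.
    Returns the proof trace (list of applied rules) or None if no proof found.
    """
    known = set(facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                trace.append((premises, conclusion))
                changed = True
                if conclusion == conjecture:
                    return trace
    return trace if conjecture in known else None

# Knowledge base: "human → mortal"; known fact: "human"; conjecture: "mortal".
print(forward_chain({"human"}, [(("human",), "mortal")], "mortal"))
# [(('human',), 'mortal')]
```

Real provers work over richer logics (first-order, higher-order) and need search heuristics, which is exactly where the learning-based papers below come in.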

Source: Learning to Prove Theorems by Learning to Generate Theorems

Libraries

Use these libraries to find Automated Theorem Proving models and implementations

Most implemented papers

Premise Selection for Theorem Proving by Deep Graph Embedding

princeton-vl/FormulaNet NeurIPS 2017

We propose a deep learning-based approach to the problem of premise selection: selecting mathematical statements relevant for proving a given conjecture.

Automated proof synthesis for propositional logic with deep neural networks

mluszczyk/deepsat 30 May 2018

As an implementation of the estimator, we propose a proposition-to-proof architecture, which is a DNN tailored to the automated proof synthesis problem.

GamePad: A Learning Environment for Theorem Proving

ml4tp/gamepad ICLR 2019

In this paper, we introduce a system called GamePad that can be used to explore the application of machine learning methods to theorem proving in the Coq proof assistant.

Guiding Inferences in Connection Tableau by Recurrent Neural Networks

BartoszPiotrowski/guiding-connection-tableau-by-RNNs 20 May 2019

We present a dataset and experiments on applying recurrent neural networks (RNNs) for guiding clause selection in the connection tableau proof calculus.

Learning to Prove Theorems via Interacting with Proof Assistants

princeton-vl/CoqGym 21 May 2019

Proof assistants offer a formalism that resembles human mathematical reasoning, representing theorems in higher-order logic and proofs as high-level tactics.

Towards Finding Longer Proofs

atpcurr/atpcurr 30 May 2019

We present a reinforcement learning (RL) based guidance system for automated theorem proving geared towards Finding Longer Proofs (FLoP).

Neural Theorem Provers Do Not Learn Rules Without Exploration

Michiel29/ntp-release 17 Jun 2019

Neural symbolic processing aims to combine the generalization of logical learning approaches and the performance of neural networks.

Deep Reinforcement Learning for Synthesizing Functions in Higher-Order Logic

barakeel/synthesis_datasets 25 Oct 2019

We set a precedent for statistically guided synthesis of Diophantine equations by solving 78.5% of the generated test problems.

G2SAT: Learning to Generate SAT Formulas

JiaxuanYou/G2SAT NeurIPS 2019

The Boolean Satisfiability (SAT) problem is the canonical NP-complete problem and is fundamental to computer science, with a wide array of applications in planning, verification, and theorem proving.
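For context, the SAT decision problem asks whether some truth assignment makes every clause true. A brute-force check makes the problem concrete, though it is exponential in the number of variables; practical solvers use conflict-driven clause learning instead. Clauses are encoded DIMACS-style as lists of signed integers (this encoding choice is mine, not from the paper).

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force SAT: try all 2^n assignments.

    clauses: list of clauses, each a list of signed ints
             (literal k means variable k is true, -k means it is false).
    Returns a satisfying assignment (tuple of bools) or None.
    """
    for assignment in product([False, True], repeat=n_vars):
        def holds(lit):
            return assignment[abs(lit) - 1] == (lit > 0)
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return assignment
    return None

# (x1 ∨ ¬x2) ∧ (x2 ∨ x3) ∧ (¬x1 ∨ ¬x3)
print(satisfiable([[1, -2], [2, 3], [-1, -3]], 3))
# (False, False, True)
```

G2SAT addresses the converse task of generating realistic SAT formulas for benchmarking such solvers.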

A Deep Reinforcement Learning Approach to First-Order Logic Theorem Proving

IBM/TRAIL 5 Nov 2019

Automated theorem provers have traditionally relied on manually tuned heuristics to guide how they perform proof search.