We present a very simple algorithm for attention that requires $O(1)$ memory with respect to sequence length and an extension to self-attention that requires $O(\log n)$ memory.
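The core observation behind such a constant-memory algorithm is that softmax attention can be accumulated one key/value pair at a time, carrying only a running maximum of the scores (for numerical stability), a running sum of exponentiated scores, and a running weighted sum of values. The sketch below is a minimal single-query illustration in plain NumPy; the function name and structure are ours for illustration, not the paper's reference implementation (which also covers chunked self-attention).

```python
import numpy as np

def streaming_attention(q, keys, values):
    """Attention for one query using O(1) memory in the number of keys.

    Illustrative sketch: (key, value) pairs are consumed one at a time,
    keeping a running maximum of the scores for numerical stability,
    a running sum of exponentiated scores, and a running weighted sum
    of values. Mathematically equivalent to softmax(q K^T / sqrt(d)) V
    for this single query.
    """
    d = q.shape[-1]
    m = -np.inf                                    # running max of scores
    s = 0.0                                        # running sum of exp(score - m)
    acc = np.zeros_like(values[0], dtype=float)    # running weighted sum of values

    for k, v in zip(keys, values):
        score = np.dot(q, k) / np.sqrt(d)
        m_new = max(m, score)
        # rescale the existing accumulators to the new running maximum
        scale = np.exp(m - m_new) if np.isfinite(m) else 0.0
        w = np.exp(score - m_new)
        s = s * scale + w
        acc = acc * scale + w * v
        m = m_new

    return acc / s
```

On random inputs this agrees with the usual implementation that materializes the full score matrix, up to floating-point error, while storing only a handful of scalars and one value-sized accumulator per query.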
We train hierarchical Transformers on the task of synthesizing hardware circuits directly from high-level logical specifications in linear-time temporal logic (LTL).
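To make "specification" concrete, a typical input is a small LTL formula such as the two-client arbiter below (a standard textbook example, not one taken from the paper), from which the synthesizer must produce a circuit satisfying it on all input sequences.

```latex
% Two-client arbiter specification (standard illustrative example):
% every request r_i is eventually answered by a grant g_i, and the
% two grants are never raised at the same time.
\mathbf{G}(r_1 \rightarrow \mathbf{F}\, g_1)
\;\wedge\;
\mathbf{G}(r_2 \rightarrow \mathbf{F}\, g_2)
\;\wedge\;
\mathbf{G}\,\neg(g_1 \wedge g_2)
```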
We examine whether self-supervised language modeling applied to mathematical formulas enables logical reasoning.
We study two fundamental questions in neuro-symbolic computing: can deep learning tackle challenging problems in logics end-to-end, and can neural networks learn the semantics of logics?
We design and conduct a simple experiment to study whether neural networks can perform several steps of approximate reasoning in a fixed-dimensional latent space.
Our experiments show that the theorem prover trained with this exploration mechanism outperforms provers that are trained only on human proofs.
Ranked #3 on Automated Theorem Proving on HOList benchmark
We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning.
We present an environment, benchmark, and deep learning driven automated theorem prover for higher-order logic.
Ranked #2 on Automated Theorem Proving on HOList benchmark
We demonstrate how to learn efficient heuristics for automated reasoning algorithms for quantified Boolean formulas through deep reinforcement learning.
Standard temporal logics such as LTL, CTL, and CTL* can refer only to a single path at a time, hence cannot express many hyperproperties of interest.
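A worked example of what single-path logics miss: observational determinism relates pairs of traces, so it needs explicit trace quantifiers. One common HyperLTL-style formulation (a standard example, not taken verbatim from this paper) is:

```latex
% Observational determinism: any two traces pi and pi' that agree on the
% low-security inputs also agree, at every step, on the low-security
% outputs. LTL, CTL, and CTL* cannot quantify over two traces at once,
% so they cannot state this property.
\forall \pi.\; \forall \pi'.\;
  \mathbf{G}\,\bigl(I_{\mathit{low}}^{\pi} = I_{\mathit{low}}^{\pi'}\bigr)
  \;\rightarrow\;
  \mathbf{G}\,\bigl(O_{\mathit{low}}^{\pi} = O_{\mathit{low}}^{\pi'}\bigr)
```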