Search Results for author: Markus N. Rabe

Found 12 papers, 6 papers with code

Memorizing Transformers

no code implementations ICLR 2022 Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, Christian Szegedy

Language models typically need to be trained or finetuned in order to acquire new knowledge, which involves updating their weights.

Language Modelling

Self-attention Does Not Need $O(n^2)$ Memory

3 code implementations 10 Dec 2021 Markus N. Rabe, Charles Staats

We present a very simple algorithm for attention that requires $O(1)$ memory with respect to sequence length and an extension to self-attention that requires $O(\log n)$ memory.
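The key idea behind the constant-memory algorithm is that softmax attention can be computed as a streaming sum: instead of materializing all attention scores at once, one keeps a running maximum (for numerical stability) plus running numerator and denominator accumulators. A minimal NumPy sketch of the single-query case follows; the function name and array shapes are illustrative, not taken from the paper:

```python
import numpy as np

def attention_constant_memory(q, keys, values):
    """Single-query softmax attention using O(1) memory in sequence length.

    Streams over (key, value) pairs, maintaining a running maximum of the
    scores plus running numerator/denominator sums, rescaling the
    accumulators whenever the maximum increases.
    """
    m = -np.inf                                        # running max score
    num = np.zeros_like(values[0], dtype=np.float64)   # weighted value sum
    den = 0.0                                          # softmax normalizer
    for k, v in zip(keys, values):
        s = float(q @ k)
        m_new = max(m, s)
        # Rescale existing accumulators to the new maximum.
        scale = np.exp(m - m_new) if m != -np.inf else 0.0
        w = np.exp(s - m_new)
        num = num * scale + w * v
        den = den * scale + w
        m = m_new
    return num / den
```

The result is numerically identical (up to floating-point error) to standard softmax attention; the paper's $O(\log n)$ bound for full self-attention comes from processing queries in chunks rather than one at a time.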

Neural Circuit Synthesis from Specification Patterns

1 code implementation NeurIPS 2021 Frederik Schmitt, Christopher Hahn, Markus N. Rabe, Bernd Finkbeiner

We train hierarchical Transformers on the task of synthesizing hardware circuits directly out of high-level logical specifications in linear-time temporal logic (LTL).
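For readers unfamiliar with LTL-based synthesis, the specifications in question are temporal formulas over input and output signals. A typical (illustrative, not taken from the paper's benchmark set) specification of a two-client arbiter demands mutual exclusion of grants and that every request is eventually granted:

```latex
\mathbf{G}\,\neg(g_1 \wedge g_2)
\;\wedge\; \mathbf{G}\,(r_1 \rightarrow \mathbf{F}\, g_1)
\;\wedge\; \mathbf{G}\,(r_2 \rightarrow \mathbf{F}\, g_2)
```

Here $\mathbf{G}$ means "globally" and $\mathbf{F}$ means "eventually"; a synthesized circuit reads the requests $r_1, r_2$ and must produce grants $g_1, g_2$ satisfying the formula on every infinite execution.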

Teaching Temporal Logics to Neural Networks

1 code implementation ICLR 2021 Christopher Hahn, Frederik Schmitt, Jens U. Kreber, Markus N. Rabe, Bernd Finkbeiner

We study two fundamental questions in neuro-symbolic computing: Can deep learning tackle challenging problems in logics end-to-end, and can neural networks learn the semantics of logics?

Mathematical Reasoning in Latent Space

no code implementations ICLR 2020 Dennis Lee, Christian Szegedy, Markus N. Rabe, Sarah M. Loos, Kshitij Bansal

We design and conduct a simple experiment to study whether neural networks can perform several steps of approximate reasoning in a fixed dimensional latent space.

Mathematical Reasoning

Learning to Reason in Large Theories without Imitation

no code implementations 25 May 2019 Kshitij Bansal, Christian Szegedy, Markus N. Rabe, Sarah M. Loos, Viktor Toman

Our experiments show that the theorem prover trained with this exploration mechanism outperforms provers that are trained only on human proofs.

Automated Theorem Proving Imitation Learning +1

Learning Heuristics for Automated Reasoning through Reinforcement Learning

no code implementations ICLR 2019 Gil Lederman, Markus N. Rabe, Edward A. Lee, Sanjit A. Seshia

We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning.

Reinforcement Learning

A Model Counter's Guide to Probabilistic Systems

no code implementations 22 Mar 2019 Marcell Vazquez-Chanlatte, Markus N. Rabe, Sanjit A. Seshia

In this paper, we systematize the modeling of probabilistic systems for the purpose of analyzing them with model counting techniques.

Learning Heuristics for Quantified Boolean Formulas through Deep Reinforcement Learning

1 code implementation 20 Jul 2018 Gil Lederman, Markus N. Rabe, Edward A. Lee, Sanjit A. Seshia

We demonstrate how to learn efficient heuristics for automated reasoning algorithms for quantified Boolean formulas through deep reinforcement learning.

Reinforcement Learning

Temporal Logics for Hyperproperties

1 code implementation 17 Jan 2014 Michael R. Clarkson, Bernd Finkbeiner, Masoud Koleini, Kristopher K. Micinski, Markus N. Rabe, César Sánchez

Standard temporal logics such as LTL, CTL, and CTL* can refer only to a single path at a time, hence cannot express many hyperproperties of interest.
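The limitation is that LTL implicitly quantifies over one execution trace, while hyperproperties relate several traces. The logics introduced in this line of work add explicit trace quantifiers; a representative formula in that style (illustrative, assuming atomic propositions $i$ for low-security inputs and $o$ for low-security outputs) expresses observational determinism, i.e., that any two traces agreeing on low inputs agree on low outputs:

```latex
\forall \pi.\; \forall \pi'.\;
  (i_{\pi} = i_{\pi'}) \rightarrow \mathbf{G}\,(o_{\pi} = o_{\pi'})
```

No LTL formula can express this, because it constrains pairs of paths simultaneously rather than each path in isolation.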

Logic in Computer Science
