no code implementations • 13 Feb 2024 • Mohamed Ghanem, Frederik Schmitt, Julian Siber, Bernd Finkbeiner
We introduce NeuRes, a neuro-symbolic proof-based SAT solver.
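Proof-based SAT solving rests on propositional resolution: from two clauses containing a complementary pair of literals, derive their resolvent, and refute a formula by deriving the empty clause. A minimal sketch of that rule (the representation is illustrative, not NeuRes's actual interface):

```python
def resolve(c1, c2, var):
    """Resolve two clauses on variable `var`.

    Clauses are frozensets of integer literals: a positive integer is a
    variable, its negation the complemented literal. Requires that `var`
    occurs in c1 and `-var` in c2.
    """
    assert var in c1 and -var in c2
    return (c1 - {var}) | (c2 - {-var})

# Resolving {x1} with {-x1} yields the empty clause, refuting the formula.
empty = resolve(frozenset({1}), frozenset({-1}), 1)
```

A resolution proof is then a sequence of such steps ending in the empty clause, which a symbolic checker can verify independently of the neural component.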
1 code implementation • 22 Jan 2024 • Matthias Cosler, Christopher Hahn, Ayham Omar, Frederik Schmitt
At the core of the solver lies a seamless integration of neural and symbolic approaches to solving the reactive synthesis problem.
no code implementations • 8 Mar 2023 • Matthias Cosler, Christopher Hahn, Daniel Mendoza, Frederik Schmitt, Caroline Trippel
A rigorous formalization of desired system requirements is indispensable when performing any verification task.
1 code implementation • 2 Mar 2023 • Matthias Cosler, Frederik Schmitt, Christopher Hahn, Bernd Finkbeiner
We propose a separated hierarchical Transformer for multimodal representation learning of the formal specification and the circuit.
no code implementations • 4 Jun 2022 • Christopher Hahn, Frederik Schmitt, Julia J. Tillman, Niklas Metzger, Julian Siber, Bernd Finkbeiner
We study the generalization abilities of language models when translating natural language into formal specifications with complex semantics.
1 code implementation • 30 May 2022 • Niklas Metzger, Christopher Hahn, Julian Siber, Frederik Schmitt, Bernd Finkbeiner
In this paper, we study how to compute how much an input token influences a Transformer model's prediction.
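One simple baseline for measuring token influence (not necessarily the method of this paper) is occlusion: mask one token at a time and record how much the model's score drops. A toy sketch, where the scoring function is a made-up stand-in for a real model:

```python
def occlusion_attribution(score_fn, tokens, mask_token="<mask>"):
    """Influence of each token = score drop when that token is masked."""
    base = score_fn(tokens)
    attributions = []
    for i in range(len(tokens)):
        occluded = tokens[:i] + [mask_token] + tokens[i + 1:]
        attributions.append(base - score_fn(occluded))
    return attributions

# Toy scorer: counts occurrences of "good" (a placeholder for a model).
score = lambda toks: float(toks.count("good"))
attr = occlusion_attribution(score, ["a", "good", "movie"])
# attr -> [0.0, 1.0, 0.0]: only masking "good" changes the score.
```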
1 code implementation • NeurIPS 2021 • Frederik Schmitt, Christopher Hahn, Markus N. Rabe, Bernd Finkbeiner
We train hierarchical Transformers on the task of synthesizing hardware circuits directly out of high-level logical specifications in linear-time temporal logic (LTL).
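LTL specifications constrain how input and output signals of a circuit evolve over time. A minimal sketch of evaluating the temporal operators G (globally) and F (finally) over a finite trace — a simplification, since LTL semantics is properly defined over infinite traces:

```python
def globally(pred, trace):
    """G pred: pred holds at every step of the (finite) trace."""
    return all(pred(step) for step in trace)

def finally_(pred, trace):
    """F pred: pred holds at some step of the (finite) trace."""
    return any(pred(step) for step in trace)

# A trace is a list of steps; each step maps signal names to values.
trace = [{"req": True, "grant": False}, {"req": False, "grant": True}]

# "Every request is eventually granted", G(req -> F grant), on this trace:
ok = all(finally_(lambda s: s["grant"], trace[i:])
         for i, step in enumerate(trace) if step["req"])
```

A synthesized circuit is correct when every trace it can produce satisfies the specification; the model-checking step that verifies this works on the symbolic circuit, not the neural network.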
2 code implementations • ICLR 2021 • Christopher Hahn, Frederik Schmitt, Jens U. Kreber, Markus N. Rabe, Bernd Finkbeiner
We study two fundamental questions in neuro-symbolic computing: can deep learning tackle challenging problems in logics end-to-end, and can neural networks learn the semantics of logics?