no code implementations • 24 Jul 2023 • Sören Becker, Michal Klein, Alexander Neitz, Giambattista Parascandolo, Niki Kilbertus
We develop a transformer-based sequence-to-sequence model that recovers scalar ordinary differential equations (ODEs) in symbolic form from irregularly sampled and noisy observations of a single solution trajectory.
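A minimal sketch of the kind of observation such a model consumes: a single trajectory of a scalar ODE, solved numerically, sampled at irregular times, and corrupted with noise. The particular ODE, sampling density, and noise level below are illustrative assumptions, not the paper's actual data-generation settings.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

# Illustrative scalar ODE: dy/dt = -0.5*y + sin(t)  (an assumption, not from the paper)
def f(t, y):
    return -0.5 * y + np.sin(t)

# Irregular sampling: sorted uniform-random time points on [0, 10]
t_obs = np.sort(rng.uniform(0.0, 10.0, size=50))

# Solve from y(0) = 1 and evaluate the solution at the irregular grid
sol = solve_ivp(f, (0.0, 10.0), y0=[1.0], t_eval=t_obs, rtol=1e-8)

# Additive Gaussian observation noise
y_obs = sol.y[0] + rng.normal(scale=0.05, size=t_obs.shape)

# (t_obs, y_obs) is the model's input; the target is the symbolic form of f.
```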
no code implementations • 5 Nov 2022 • Sören Becker, Michal Klein, Alexander Neitz, Giambattista Parascandolo, Niki Kilbertus
Natural laws are often described through differential equations, yet finding a differential equation that captures the governing law underlying observed data remains a challenging and still largely manual task.
1 code implementation • 13 Sep 2021 • Hsiao-Ru Pan, Nico Gürtler, Alexander Neitz, Bernhard Schölkopf
The predominant approach in reinforcement learning is to assign credit to actions based on the expected return.
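For context, "expected return" credit assignment scores each action by the discounted sum of rewards that follows it. A minimal sketch of the standard Monte Carlo estimate:

```python
def discounted_returns(rewards, gamma=0.99):
    """Monte Carlo return G_t = sum_k gamma^k * r_{t+k} for each step t."""
    returns = [0.0] * len(rewards)
    g = 0.0
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g
        returns[t] = g
    return returns

# Credit each action in a short episode by the return that follows it.
print(discounted_returns([0.0, 0.0, 1.0]))  # [0.9801, 0.99, 1.0]
```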
1 code implementation • 11 Jun 2021 • Luca Biggio, Tommaso Bendinelli, Alexander Neitz, Aurelien Lucchi, Giambattista Parascandolo
We procedurally generate an unbounded set of equations, and simultaneously pre-train a Transformer to predict the symbolic equation from a corresponding set of input-output pairs.
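A toy version of this data pipeline, assuming sympy for the symbolic side; the operator set, tree depth, and sampling ranges are illustrative assumptions rather than the paper's configuration:

```python
import numpy as np
import sympy as sp

rng = np.random.default_rng(0)
x = sp.Symbol('x')
UNARY = [sp.sin, sp.cos, sp.exp]
BINARY = [sp.Add, sp.Mul]

def random_expr(depth=3):
    """Sample a random expression tree over x (illustrative generator)."""
    if depth == 0 or rng.random() < 0.3:
        return x if rng.random() < 0.5 else sp.Float(round(rng.uniform(-2, 2), 2))
    if rng.random() < 0.5:
        return UNARY[rng.integers(len(UNARY))](random_expr(depth - 1))
    op = BINARY[rng.integers(len(BINARY))]
    return op(random_expr(depth - 1), random_expr(depth - 1))

# One pre-training example: (input-output pairs, symbolic target)
expr = random_expr()
fn = sp.lambdify(x, expr, 'numpy')
xs = rng.uniform(-3, 3, size=64)
ys = np.asarray(fn(xs), dtype=float) * np.ones_like(xs)  # broadcast constants
keep = np.isfinite(ys)                                   # drop overflow/NaN points
print(expr, xs[keep][:3], ys[keep][:3])
```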
no code implementations • ICLR 2021 • Alexander Neitz, Giambattista Parascandolo, Bernhard Schölkopf
By learning to predict trajectories of dynamical systems, model-based methods can make extensive use of all observations from past experience.
no code implementations • 1 Jan 2021 • Giambattista Parascandolo, Lars Holger Buesing, Josh Merel, Leonard Hasenclever, John Aslanides, Jessica B Hamrick, Nicolas Heess, Alexander Neitz, Theophane Weber
Standard planners for sequential decision making (including Monte Carlo planning, tree search, and dynamic programming) are constrained by an implicit sequential planning assumption: the order in which a plan is constructed is the same as the order in which it is executed.
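The paper's alternative (Divide-and-Conquer MCTS) relaxes this assumption by constructing plans non-sequentially: pick an intermediate subgoal, then recursively plan each half. A schematic sketch; `propose_subgoal` and `directly_reachable` are hypothetical stand-ins for the learned components:

```python
def plan(start, goal, directly_reachable, propose_subgoal, depth=0, max_depth=8):
    """Build a plan by recursive splitting rather than front-to-back rollout."""
    if directly_reachable(start, goal) or depth >= max_depth:
        return [start, goal]
    mid = propose_subgoal(start, goal)  # e.g. sampled from a learned subgoal prior
    left = plan(start, mid, directly_reachable, propose_subgoal, depth + 1, max_depth)
    right = plan(mid, goal, directly_reachable, propose_subgoal, depth + 1, max_depth)
    return left[:-1] + right            # splice halves, avoiding duplicate midpoint
```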
1 code implementation • ICLR 2021 • Ossama Ahmed, Frederik Träuble, Anirudh Goyal, Alexander Neitz, Yoshua Bengio, Bernhard Schölkopf, Manuel Wüthrich, Stefan Bauer
To facilitate research addressing this problem, we propose CausalWorld, a benchmark for causal structure and transfer learning in a robotic manipulation environment.
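A minimal interaction loop with the benchmark, following the public rr-learning/CausalWorld repository; exact import paths and task-generator IDs may differ across versions, so treat them as assumptions:

```python
from causal_world.envs import CausalWorld
from causal_world.task_generators import generate_task

# 'pushing' is one of the documented task generators (assumption: name unchanged)
task = generate_task(task_generator_id='pushing')
env = CausalWorld(task=task)
obs = env.reset()
for _ in range(100):
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```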
3 code implementations • ICLR 2021 • Giambattista Parascandolo, Alexander Neitz, Antonio Orvieto, Luigi Gresele, Bernhard Schölkopf
In this paper, we investigate the principle that "good explanations are hard to vary" in the context of deep learning.
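Concretely, the paper operationalizes this principle with an "AND-mask" on gradients: a parameter update is taken only in components whose gradient sign agrees across training environments. A minimal NumPy sketch; the agreement threshold and shapes are illustrative:

```python
import numpy as np

def and_mask_update(env_grads, agreement=1.0):
    """env_grads: (n_envs, n_params) per-environment gradients.
    Zero out components whose signs disagree across environments."""
    signs = np.sign(env_grads)
    consistency = np.abs(signs.mean(axis=0))  # per-component agreement in [0, 1]
    mask = consistency >= agreement
    return mask * env_grads.mean(axis=0)

grads = np.array([[ 0.5, -0.2,  0.1],
                  [ 0.4,  0.3,  0.2]])
print(and_mask_update(grads))  # only components with unanimous sign survive
```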
2 code implementations • NeurIPS 2018 • Alexander Neitz, Giambattista Parascandolo, Stefan Bauer, Bernhard Schölkopf
We introduce a method that enables a recurrent dynamics model to be temporally abstract.
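One way temporal abstraction is realized here (the Adaptive Skip Intervals scheme): during training, the model's one-shot prediction is matched against the closest of the next H ground-truth frames, so it may skip hard-to-predict intermediate steps. A hedged sketch of the target-selection step; `model` and the squared-error loss are placeholders:

```python
import torch

def adaptive_skip_loss(model, frames, t, horizon=4):
    """frames: (T, ...) ground-truth sequence. Train the prediction from frame t
    against whichever of the next `horizon` frames it matches best."""
    pred = model(frames[t])                            # single-step prediction
    candidates = frames[t + 1 : t + 1 + horizon]       # allowable skip targets
    losses = torch.stack([torch.mean((pred - c) ** 2) for c in candidates])
    best = torch.argmin(losses)                        # easiest-to-match frame
    return losses[best], t + 1 + int(best)             # loss and matched time index
```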