Search Results for author: Alexander Neitz

Found 10 papers, 5 papers with code

Predicting Ordinary Differential Equations with Transformers

no code implementations · 24 Jul 2023 · Sören Becker, Michal Klein, Alexander Neitz, Giambattista Parascandolo, Niki Kilbertus

We develop a transformer-based sequence-to-sequence model that recovers scalar ordinary differential equations (ODEs) in symbolic form from irregularly sampled and noisy observations of a single solution trajectory.
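The excerpt describes a set-to-sequence recipe: encode sampled (t, x(t)) observations, decode the equation as a sequence of symbolic tokens. Below is a minimal PyTorch sketch of that general shape; the class name, dimensions, and vocabulary are hypothetical stand-ins, not the authors' released code.

```python
# Sketch of a set-to-sequence model in the spirit described above:
# encode (t, x(t)) observation pairs, decode a token sequence for the ODE.
import torch
import torch.nn as nn

class ODERecoveryModel(nn.Module):
    def __init__(self, vocab_size=64, d_model=128):
        super().__init__()
        self.obs_embed = nn.Linear(2, d_model)            # embed (t, x(t)) pairs
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab_size)         # next-token logits

    def forward(self, observations, target_tokens):
        # observations: (batch, n_points, 2); target_tokens: (batch, seq_len)
        src = self.obs_embed(observations)
        tgt = self.tok_embed(target_tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        return self.out(self.transformer(src, tgt, tgt_mask=mask))

model = ODERecoveryModel()
obs = torch.randn(8, 50, 2)             # 50 irregularly sampled (t, x) points
tokens = torch.randint(0, 64, (8, 20))  # prefix of the symbolic equation
logits = model(obs, tokens)             # (8, 20, 64), trained with cross-entropy
```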

Discovering ordinary differential equations that govern time-series

no code implementations · 5 Nov 2022 · Sören Becker, Michal Klein, Alexander Neitz, Giambattista Parascandolo, Niki Kilbertus

Natural laws are often described through differential equations, yet finding a differential equation that describes the governing law underlying observed data is a challenging and still mostly manual task.

Time Series · Time Series Analysis

Direct Advantage Estimation

1 code implementation · 13 Sep 2021 · Hsiao-Ru Pan, Nico Gürtler, Alexander Neitz, Bernhard Schölkopf

The predominant approach in reinforcement learning is to assign credit to actions based on the expected return.
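For reference, here is a minimal sketch of the return-based credit assignment the excerpt calls predominant: each action is credited with its discounted return, often centered by a value baseline to give an advantage. The numbers are purely illustrative, and this is the baseline the paper departs from, not its Direct Advantage Estimation method.

```python
# Return-based credit assignment with a value baseline (illustrative only).
def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * G_{t+1} for one episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

rewards = [0.0, 0.0, 1.0]
values = [0.3, 0.5, 0.8]                 # hypothetical V(s_t) estimates
returns = discounted_returns(rewards)
advantages = [g - v for g, v in zip(returns, values)]
print(returns)      # [0.9801, 0.99, 1.0]
print(advantages)   # credit relative to the baseline
```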

Neural Symbolic Regression that Scales

1 code implementation · 11 Jun 2021 · Luca Biggio, Tommaso Bendinelli, Alexander Neitz, Aurelien Lucchi, Giambattista Parascandolo

We procedurally generate an unbounded set of equations and simultaneously pre-train a Transformer to predict the symbolic equation from a corresponding set of input-output pairs. A sketch of such a data-generation loop appears after the tags below.

Regression · Symbolic Regression
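The following is a minimal sketch of the kind of data-generation loop the abstract describes: procedurally sample symbolic equations, evaluate them on random inputs, and emit (input-output set, token sequence) training pairs for a Transformer. The grammar and sampler here are hypothetical stand-ins, not the paper's generator.

```python
# Procedural generation of (input-output set, token sequence) training pairs.
import random
import numpy as np

UNARY = {"sin": np.sin, "cos": np.cos, "exp": np.exp}
BINARY = {"+": np.add, "*": np.multiply}

def sample_expression(depth=2):
    """Sample a random expression tree over a single variable x."""
    if depth == 0 or random.random() < 0.3:
        return ("x",)
    if random.random() < 0.5:
        op = random.choice(list(UNARY))
        return (op, sample_expression(depth - 1))
    op = random.choice(list(BINARY))
    return (op, sample_expression(depth - 1), sample_expression(depth - 1))

def evaluate(expr, x):
    if expr[0] == "x":
        return x
    if expr[0] in UNARY:
        return UNARY[expr[0]](evaluate(expr[1], x))
    return BINARY[expr[0]](evaluate(expr[1], x), evaluate(expr[2], x))

def tokens(expr):
    """Prefix-order token sequence, the target for the Transformer decoder."""
    return [expr[0]] + sum((tokens(c) for c in expr[1:]), [])

expr = sample_expression()
x = np.random.uniform(-1, 1, size=100)
pair = (np.stack([x, evaluate(expr, x)], axis=1), tokens(expr))
```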

Learning to interpret trajectories

no code implementations · ICLR 2021 · Alexander Neitz, Giambattista Parascandolo, Bernhard Schölkopf

By learning to predict trajectories of dynamical systems, model-based methods can make extensive use of all observations from past experience.
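The model-based ingredient the excerpt refers to is a learned dynamics model fit on logged experience; rolling it forward yields predicted trajectories. A minimal sketch under those assumptions, with hypothetical architecture and data:

```python
# One-step dynamics model fit on past (state, action, next state) transitions.
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    def __init__(self, state_dim=4, action_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        # Predict the next state from the current state and action.
        return self.net(torch.cat([state, action], dim=-1))

model = DynamicsModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
s, a, s_next = torch.randn(32, 4), torch.randn(32, 2), torch.randn(32, 4)
opt.zero_grad()
loss = nn.functional.mse_loss(model(s, a), s_next)  # fit on past experience
loss.backward()
opt.step()
```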

Learning explanations that are hard to vary

3 code implementations · ICLR 2021 · Giambattista Parascandolo, Alexander Neitz, Antonio Orvieto, Luigi Gresele, Bernhard Schölkopf

In this paper, we investigate the principle that "good explanations are hard to vary" in the context of deep learning.

Memorization
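One way this principle can be operationalized, in the spirit of the paper's invariant-learning idea, is a sign-agreement mask on per-environment gradients: keep an update component only where the environments agree on its direction. The function and threshold below are illustrative, not lifted from the linked implementations.

```python
# Sketch of a sign-agreement gradient mask across training environments.
import torch

def agreement_mask(env_grads, agreement=1.0):
    """Zero out gradient components whose signs disagree across environments.

    env_grads: tensor of shape (n_envs, n_params).
    agreement: fraction of environments that must share the majority sign.
    """
    signs = torch.sign(env_grads)
    mask = signs.mean(dim=0).abs() >= agreement    # full agreement if 1.0
    return env_grads.mean(dim=0) * mask

grads = torch.tensor([[0.5, -0.2, 0.10],
                      [0.3,  0.4, 0.20]])          # two environments
print(agreement_mask(grads))                       # tensor([0.4000, 0.0000, 0.1500])
```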
