Search Results for author: Markus Holzleitner

Found 12 papers, 8 papers with code

Universal Physics Transformers

no code implementations • 19 Feb 2024 • Benedikt Alkin, Andreas Fürst, Simon Schmid, Lukas Gruber, Markus Holzleitner, Johannes Brandstetter

Deep neural network based surrogates for partial differential equations have recently gained increased interest.

SymbolicAI: A framework for logic-based approaches combining generative models and solvers

2 code implementations • 1 Feb 2024 • Marius-Constantin Dinu, Claudiu Leoveanu-Condrei, Markus Holzleitner, Werner Zellinger, Sepp Hochreiter

We conclude by introducing a quality measure and its empirical score for evaluating these computational graphs, and propose a benchmark that compares various state-of-the-art LLMs across a set of complex workflows.

Few-Shot Learning • Probabilistic Programming
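
To make the combination of generative models and solvers concrete, here is a minimal, hypothetical sketch (it does not use the SymbolicAI API): a stand-in function plays the role of an LLM that proposes a symbolic expression, and SymPy, a deterministic solver, evaluates it, so the two steps form a small computational graph.

```python
# Hypothetical sketch (not the SymbolicAI API): an LLM proposes a symbolic
# expression for a natural-language question, and a deterministic solver
# (here SymPy) evaluates it, so the final answer comes from the solver.
import sympy

def fake_llm(prompt: str) -> str:
    """Stand-in for a generative model call; returns a candidate expression."""
    # A real system would query an LLM here; we hard-code its output.
    return "x**2 - 4"

def answer(question: str) -> list:
    expr_text = fake_llm(question)                 # generative step
    expr = sympy.sympify(expr_text)                # parse into a symbolic object
    return sympy.solve(expr, sympy.Symbol("x"))    # exact solving step

print(answer("Which x satisfy x squared minus four equals zero?"))  # [-2, 2]
```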

On regularized polynomial functional regression

no code implementations • 6 Nov 2023 • Markus Holzleitner, Sergei Pereverzyev

This article offers a comprehensive treatment of polynomial functional regression, culminating in the establishment of a novel finite sample bound.

regression
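
For intuition only, a rough sketch of regularized degree-2 functional regression under simplifying assumptions (functional covariates observed on a grid, a small cosine basis, plain ridge regularization); this is not the estimator analyzed in the paper, and the data below are synthetic.

```python
# Illustrative sketch only (not the paper's estimator): degree-2 functional
# regression with ridge regularization. Each covariate is a function x(t)
# observed on a grid; features are discretized linear functionals <b_j, x>
# plus their pairwise products as degree-2 terms.
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 200, 50, 4                      # samples, grid points, basis size
t = np.linspace(0, 1, m)
basis = np.array([np.cos(np.pi * j * t) for j in range(d)])   # (d, m)

X = rng.normal(size=(n, m))               # functional covariates on the grid
y = X @ t / m + 0.5 * (X**2) @ t / m + 0.1 * rng.normal(size=n)

lin = X @ basis.T / m                     # linear functionals, shape (n, d)
quad = np.einsum("ni,nj->nij", lin, lin).reshape(n, -1)       # degree-2 terms
Phi = np.hstack([np.ones((n, 1)), lin, quad])

lam = 1e-2                                # regularization parameter
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
print("train RMSE:", np.sqrt(np.mean((Phi @ coef - y) ** 2)))
```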

Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation

1 code implementation • 2 May 2023 • Marius-Constantin Dinu, Markus Holzleitner, Maximilian Beck, Hoan Duc Nguyen, Andrea Huber, Hamid Eghbal-zadeh, Bernhard A. Moser, Sergei Pereverzyev, Sepp Hochreiter, Werner Zellinger

Our method outperforms deep embedded validation (DEV) and importance weighted validation (IWV) on all datasets, setting a new state-of-the-art performance for solving parameter choice issues in unsupervised domain adaptation with theoretical error guarantees.

Unsupervised Domain Adaptation
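
The importance-weighted validation baseline mentioned above can be sketched as follows. This illustrates the generic IWV idea, not the paper's aggregation method, and the density ratios and losses here are made-up numbers.

```python
# A minimal sketch of importance-weighted validation (IWV): the target risk of
# each candidate hyperparameter is estimated on labeled source validation data
# reweighted by density ratios w(x) ~ p_target(x) / p_source(x), and the
# candidate with the smallest estimated risk is selected.
import numpy as np

def iwv_select(candidates, losses, weights):
    """candidates: list of hyperparameters; losses[i]: per-sample source
    validation losses of model i; weights: density-ratio estimates per sample."""
    estimated_target_risk = [np.mean(weights * l) for l in losses]
    return candidates[int(np.argmin(estimated_target_risk))]

# Toy usage with synthetic numbers.
rng = np.random.default_rng(0)
weights = rng.lognormal(size=100)                  # stand-in density ratios
losses = [rng.uniform(0.2 + 0.1 * k, 1.0, size=100) for k in range(3)]
print(iwv_select([1e-3, 1e-2, 1e-1], losses, weights))
```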

Domain Generalization by Functional Regression

1 code implementation • 9 Feb 2023 • Markus Holzleitner, Sergei V. Pereverzyev, Werner Zellinger

The problem of domain generalization is to learn, given data from different source distributions, a model that can be expected to generalize well on new target distributions which are only seen through unlabeled samples.

Domain Generalization • regression
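
A rough sketch of this setting under simplifying assumptions (each domain summarized by a kernel mean embedding on a fixed landmark set, and a plain linear predictor with input-embedding interaction features); it is not the functional-regression estimator studied in the paper.

```python
# Rough sketch of the setting (not the paper's functional-regression method):
# each domain is summarized by a kernel mean embedding of its unlabeled inputs,
# and one model predicts y from features that couple the input with that
# embedding; a new target domain enters only through its unlabeled samples.
import numpy as np

rng = np.random.default_rng(0)
landmarks = np.linspace(-3, 3, 10)[:, None]          # fixed reference points

def embed(X, gamma=0.5):                              # kernel mean embedding
    d2 = (X[:, None, 0] - landmarks[None, :, 0]) ** 2
    return np.exp(-gamma * d2).mean(axis=0)           # shape (10,)

def features(X, mu):
    return np.hstack([X, X * mu[None, :]])            # input-embedding interactions

# Source domains: inputs ~ N(m, 1), labels y = m * x, with m varying per domain.
Phi, ys = [], []
for m in [-2.0, -1.0, 1.0, 2.0]:
    X = rng.normal(m, 1.0, size=(200, 1)); y = m * X[:, 0]
    Phi.append(features(X, embed(X))); ys.append(y)
w = np.linalg.lstsq(np.vstack(Phi), np.concatenate(ys), rcond=None)[0]

# Target domain with m = 0.5, seen only through unlabeled inputs.
X_t = rng.normal(0.5, 1.0, size=(200, 1))
pred = features(X_t, embed(X_t)) @ w                  # roughly 0.5 * x
```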

Few-Shot Learning by Dimensionality Reduction in Gradient Space

1 code implementation • 7 Jun 2022 • Martin Gauch, Maximilian Beck, Thomas Adler, Dmytro Kotsur, Stefan Fiel, Hamid Eghbal-zadeh, Johannes Brandstetter, Johannes Kofler, Markus Holzleitner, Werner Zellinger, Daniel Klotz, Sepp Hochreiter, Sebastian Lehner

We introduce SubGD, a novel few-shot learning method which is based on the recent finding that stochastic gradient descent updates tend to live in a low-dimensional parameter subspace.

Dimensionality Reduction • Few-Shot Learning
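
A hedged sketch of the underlying idea, assuming a set of recorded parameter-update vectors and a toy quadratic loss; it is not the exact SubGD procedure.

```python
# Sketch of the core idea (not the exact SubGD algorithm): estimate a
# low-dimensional subspace from parameter updates observed during pretraining,
# then fine-tune on a new task with gradients projected onto that subspace.
import numpy as np

rng = np.random.default_rng(0)
dim, k = 50, 3

# Pretend these are parameter-update vectors recorded across pretraining tasks;
# by construction they span only a k-dimensional subspace.
updates = rng.normal(size=(200, k)) @ rng.normal(size=(k, dim))
U = np.linalg.svd(updates, full_matrices=False)[2][:k]   # top-k directions, (k, dim)

def grad(theta, A, b):                 # gradient of a toy quadratic loss
    return A @ theta - b

A, b = np.eye(dim), rng.normal(size=dim)
theta = np.zeros(dim)
for _ in range(100):
    g = grad(theta, A, b)
    g_sub = U.T @ (U @ g)              # keep only the low-dimensional component
    theta -= 0.1 * g_sub               # gradient step restricted to the subspace
```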

MC-LSTM: Mass-Conserving LSTM

1 code implementation • 13 Jan 2021 • Pieter-Jan Hoedt, Frederik Kratzert, Daniel Klotz, Christina Halmich, Markus Holzleitner, Grey Nearing, Sepp Hochreiter, Günter Klambauer

MC-LSTMs set a new state-of-the-art for neural arithmetic units at learning arithmetic operations, such as addition tasks, which have a strong conservation law, as the sum is constant over time.

Inductive Bias
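
A simplified, untrained illustration of the mass-conservation mechanism (not the full MC-LSTM architecture): with a column-stochastic redistribution matrix and an input gate that sums to one, stored mass plus cumulative outflow always equals cumulative inflow.

```python
# Simplified, untrained illustration of mass conservation (not the full
# MC-LSTM): incoming mass is distributed over the cells by a stochastic input
# gate, cell mass is moved around by a column-stochastic redistribution matrix,
# and an output gate releases a fraction of each cell's mass.
import numpy as np

rng = np.random.default_rng(0)
h = 4                                               # number of memory cells
R = rng.random((h, h)); R /= R.sum(axis=0)          # columns sum to 1
i_gate = rng.random(h); i_gate /= i_gate.sum()      # distributes new mass
o_gate = rng.random(h) * 0.5                        # fraction of mass released

c = np.zeros(h)                                     # cell states (stored mass)
total_in, total_out = 0.0, 0.0
for x in rng.random(20):                            # scalar mass inputs
    c = R @ c + i_gate * x                          # redistribute + add inflow
    out = o_gate * c                                # outflow per cell
    c = c - out
    total_in += x; total_out += out.sum()
    assert np.isclose(c.sum() + total_out, total_in)   # mass is conserved
```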

Convergence Proof for Actor-Critic Methods Applied to PPO and RUDDER

no code implementations • 2 Dec 2020 • Markus Holzleitner, Lukas Gruber, José Arjona-Medina, Johannes Brandstetter, Sepp Hochreiter

We prove under commonly used assumptions the convergence of actor-critic reinforcement learning algorithms, which simultaneously learn a policy function, the actor, and a value function, the critic.

Reinforcement Learning (RL)
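
A minimal generic actor-critic sketch on a one-state, two-action toy problem; it shows the simultaneous actor and critic updates described above, but none of the PPO or RUDDER specifics analyzed in the paper.

```python
# Minimal generic actor-critic sketch (not PPO or RUDDER): a softmax policy
# (actor) and a value estimate (critic) are updated simultaneously; the
# critic's value serves as a baseline in the policy-gradient step.
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(2)                     # actor parameters
v = 0.0                                  # critic: value of the single state
alpha_pi, alpha_v = 0.1, 0.1
rewards = np.array([1.0, 0.0])           # expected reward of each action

for _ in range(2000):
    p = np.exp(logits - logits.max()); p /= p.sum()
    a = rng.choice(2, p=p)
    r = rewards[a] + 0.1 * rng.normal()  # noisy observed reward
    adv = r - v                          # advantage estimate via the critic
    grad_log = -p; grad_log[a] += 1.0    # d log pi(a) / d logits
    logits += alpha_pi * adv * grad_log  # actor step (policy gradient)
    v += alpha_v * adv                   # critic step toward the observed return

print("learned policy:", p, "critic value:", round(v, 2))
```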
