Search Results for author: Maximilian Beck

Found 4 papers, 4 papers with code

Vision-LSTM: xLSTM as Generic Vision Backbone

1 code implementation • 6 Jun 2024 • Benedikt Alkin, Maximilian Beck, Korbinian Pöppel, Sepp Hochreiter, Johannes Brandstetter

Transformers are widely used as generic backbones in computer vision, even though they were originally introduced for natural language processing.

xLSTM: Extended Long Short-Term Memory

1 code implementation • 7 May 2024 • Maximilian Beck, Korbinian Pöppel, Markus Spanring, Andreas Auer, Oleksandra Prudnikova, Michael Kopp, Günter Klambauer, Johannes Brandstetter, Sepp Hochreiter

In the 1990s, the constant error carousel and gating were introduced as the central ideas of the Long Short-Term Memory (LSTM).

Language Modelling
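The constant error carousel and gating mentioned in the abstract can be illustrated with a minimal sketch of the classic LSTM cell update. This is a toy implementation for illustration only; the sizes and weight initialization are assumptions, and it is not the xLSTM parameterization introduced in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,)."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])        # input gate
    f = sigmoid(z[H:2*H])      # forget gate
    o = sigmoid(z[2*H:3*H])    # output gate
    g = np.tanh(z[3*H:4*H])    # candidate cell input
    # Constant error carousel: the cell state is updated additively,
    # so gradients can flow through time without vanishing.
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c

# Run a few steps on random inputs (illustrative sizes).
rng = np.random.default_rng(0)
D, H = 3, 4
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):
    h, c = lstm_cell_step(rng.standard_normal(D), h, c, W, U, b)
```

The additive cell update `c = f * c_prev + i * g` is the carousel; the sigmoid gates decide what to keep, write, and expose at each step.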

Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation

1 code implementation • 2 May 2023 • Marius-Constantin Dinu, Markus Holzleitner, Maximilian Beck, Hoan Duc Nguyen, Andrea Huber, Hamid Eghbal-zadeh, Bernhard A. Moser, Sergei Pereverzyev, Sepp Hochreiter, Werner Zellinger

Our method outperforms deep embedded validation (DEV) and importance weighted validation (IWV) on all datasets, setting a new state-of-the-art performance for solving parameter choice issues in unsupervised domain adaptation with theoretical error guarantees.

Unsupervised Domain Adaptation

Few-Shot Learning by Dimensionality Reduction in Gradient Space

1 code implementation • 7 Jun 2022 • Martin Gauch, Maximilian Beck, Thomas Adler, Dmytro Kotsur, Stefan Fiel, Hamid Eghbal-zadeh, Johannes Brandstetter, Johannes Kofler, Markus Holzleitner, Werner Zellinger, Daniel Klotz, Sepp Hochreiter, Sebastian Lehner

We introduce SubGD, a novel few-shot learning method which is based on the recent finding that stochastic gradient descent updates tend to live in a low-dimensional parameter subspace.

Dimensionality Reduction • Few-Shot Learning
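The core idea stated in the abstract, that SGD updates tend to lie in a low-dimensional parameter subspace, can be sketched as follows: record update vectors, extract dominant directions with an SVD, and project later gradients onto that subspace. This is a hedged toy illustration of the idea, not the SubGD method itself; the shapes, synthetic update matrix, and subspace dimension are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
P, T, k = 10, 50, 3  # parameter count, recorded updates, subspace dimension

# Synthetic stand-in for SGD update vectors collected during training:
# by construction they lie mostly in a k-dimensional subspace plus noise.
basis = np.linalg.qr(rng.standard_normal((P, k)))[0]
updates = basis @ rng.standard_normal((k, T)) + 0.01 * rng.standard_normal((P, T))

# SVD of the (P, T) update matrix yields the dominant update directions.
U, s, _ = np.linalg.svd(updates, full_matrices=False)
V = U[:, :k]  # (P, k) orthonormal basis of the dominant subspace

# Restrict a new gradient step to the identified subspace.
grad = rng.standard_normal(P)
grad_sub = V @ (V.T @ grad)
```

Projecting gradients this way reduces the effective number of free directions from `P` to `k`, which is what makes learning feasible from very few examples.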
