no code implementations • 17 Feb 2023 • Bernhard Schäfl, Lukas Gruber, Johannes Brandstetter, Sepp Hochreiter
Graph neural networks (GNNs) have evolved into one of the most popular deep learning architectures.
1 code implementation • 1 Jun 2022 • Bernhard Schäfl, Lukas Gruber, Angela Bitto-Nemling, Sepp Hochreiter
In experiments on small-sized tabular datasets with less than 1,000 samples, Hopular surpasses Gradient Boosting, Random Forests, SVMs, and in particular several Deep Learning methods.
Ranked #1 on General Classification on Shrutime
2 code implementations • ICLR 2021 • Hubert Ramsauer, Bernhard Schäfl, Johannes Lehner, Philipp Seidl, Michael Widrich, Thomas Adler, Lukas Gruber, Markus Holzleitner, Milena Pavlović, Geir Kjetil Sandve, Victor Greiff, David Kreil, Michael Kopp, Günter Klambauer, Johannes Brandstetter, Sepp Hochreiter
The new update rule is equivalent to the attention mechanism used in transformers.
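The stated equivalence can be made concrete: the modern Hopfield update retrieves a state as a softmax-weighted combination of stored patterns, which is exactly a transformer attention step with the state as query and the stored patterns as keys and values. A minimal NumPy sketch, with the inverse temperature beta and toy dimensions chosen purely for illustration:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
d, N = 4, 3                      # pattern dimension, number of stored patterns
X = rng.standard_normal((d, N))  # stored patterns as columns of X

# Modern Hopfield update rule: xi_new = X @ softmax(beta * X.T @ xi)
beta = 1.0 / np.sqrt(d)          # illustrative choice, mirroring attention scaling
xi = rng.standard_normal(d)      # current state (the "query" pattern)
xi_new = X @ softmax(beta * X.T @ xi)

# The same computation written as transformer attention with
# query Q = xi, and keys K and values V both set to the stored patterns:
Q = xi[None, :]  # shape (1, d)
K = V = X.T      # shape (N, d)
attn = softmax(beta * (Q @ K.T).ravel()) @ V  # shape (d,)

assert np.allclose(xi_new, attn)
```

Iterating this update converges toward a stored pattern; a single step is one attention layer applied to the state.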
Tasks: Immune Repertoire Classification, Multiple Instance Learning, +1
1 code implementation • NeurIPS 2020 • Michael Widrich, Bernhard Schäfl, Hubert Ramsauer, Milena Pavlović, Lukas Gruber, Markus Holzleitner, Johannes Brandstetter, Geir Kjetil Sandve, Victor Greiff, Sepp Hochreiter, Günter Klambauer
We show that the attention mechanism of transformer architectures is actually the update rule of modern Hopfield networks that can store exponentially many patterns.
no code implementations • NeurIPS Deep Inverse Workshop 2019 • Michael Gillhofer, Hubert Ramsauer, Johannes Brandstetter, Bernhard Schäfl, Sepp Hochreiter
We propose a GAN-based approach to solve inverse problems which have non-differentiable or non-continuous forward relations.