no code implementations • 28 Feb 2024 • Laura Manduchi, Kushagra Pandey, Robert Bamler, Ryan Cotterell, Sina Däubener, Sophie Fellenz, Asja Fischer, Thomas Gärtner, Matthias Kirchler, Marius Kloft, Yingzhen Li, Christoph Lippert, Gerard de Melo, Eric Nalisnick, Björn Ommer, Rajesh Ranganath, Maja Rudolph, Karen Ullrich, Guy Van Den Broeck, Julia E Vogt, Yixin Wang, Florian Wenzel, Frank Wood, Stephan Mandt, Vincent Fortuin
The field of deep generative modeling has grown rapidly and consistently over the years.
2 code implementations • 9 Jun 2023 • Pascal Welke, Maximilian Thiessen, Fabian Jogl, Thomas Gärtner
We investigate novel random graph embeddings that can be computed in expected polynomial time and that are able to distinguish all non-isomorphic graphs in expectation.
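One well-known family of such embeddings uses homomorphism counts of randomly sampled patterns as features. As a toy illustration only (the paper's actual construction is more general), the sketch below uses path patterns, whose homomorphism counts into a graph equal its walk counts and can be read off powers of the adjacency matrix; all function names are hypothetical:

```python
import numpy as np

def path_homomorphism_count(adj, k):
    """Count homomorphisms from the path with k edges into the graph,
    i.e. the number of walks of length k: 1^T A^k 1."""
    ones = np.ones(adj.shape[0])
    v = ones.copy()
    for _ in range(k):
        v = adj @ v
    return float(ones @ v)

def random_path_embedding(adj, rng, dim=8, max_len=5):
    """Embed a graph via homomorphism counts of randomly drawn paths."""
    lengths = rng.integers(1, max_len + 1, size=dim)
    return np.array([path_homomorphism_count(adj, k) for k in lengths])

# toy usage: a triangle vs. a path on 3 vertices
triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
path3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
emb_t = random_path_embedding(triangle, rng)
rng = np.random.default_rng(0)  # reuse the same random paths for both graphs
emb_p = random_path_embedding(path3, rng)
```

Because both graphs are embedded with the same sampled patterns, differing embeddings witness non-isomorphism; path counts alone, however, do not distinguish all graphs, which is why richer pattern classes are needed.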
no code implementations • 29 Sep 2021 • Eva Smit, Thomas Gärtner, Patrick Forré
With the help of the implicit function theorem, we show how to use a diverse set of models that have already been trained on the data to select a pair of data points that share a common value of the interpretable factors.

2 code implementations • 30 Nov 2018 • Sven Giesselbach, Katrin Ullrich, Michael Kamp, Daniel Paurat, Thomas Gärtner
We propose a novel transfer learning approach for orphan screening called corresponding projections.
no code implementations • NeurIPS 2017 • Michael Kamp, Mario Boley, Olana Missura, Thomas Gärtner
We present a novel parallelisation scheme that simplifies the adaptation of learning algorithms to growing amounts of data as well as growing needs for accurate and confident predictions in critical applications.
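A hedged sketch of one aggregation primitive associated with this line of work: combining independently trained models by computing a Radon point of their parameter vectors (whether this matches the paper's exact scheme is an assumption here, not a claim). Given d+2 points in R^d, a Radon point lies in the convex hulls of both parts of a Radon partition:

```python
import numpy as np

def radon_point(points):
    """Compute a Radon point of d+2 points in R^d.

    Finds a nonzero lambda with sum(lambda) = 0 and sum(lambda_i x_i) = 0,
    then returns the common point of the convex hulls of the points with
    positive and negative coefficients."""
    n, d = points.shape
    assert n == d + 2, "need exactly d+2 points in R^d"
    # (d+1) x (d+2) homogeneous system: columns are (x_i, 1)
    M = np.vstack([points.T, np.ones(n)])
    # a null-space vector: last right-singular vector of M
    _, _, vt = np.linalg.svd(M)
    lam = vt[-1]
    pos = lam > 0
    s = lam[pos].sum()
    return (lam[pos] @ points[pos]) / s

# usage: aggregate 4 hypothetical 2-D model parameter vectors,
# e.g. linear models trained on disjoint shards of the data
rng = np.random.default_rng(1)
models = rng.normal(loc=2.0, scale=0.5, size=(4, 2))  # d=2 needs d+2=4 points
agg = radon_point(models)
```

The appeal of such a scheme is that aggregation is a black box over any base learner, so adapting an existing algorithm to more data only requires training more copies in parallel and aggregating.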
no code implementations • 6 Sep 2018 • Dino Oglic, Thomas Gärtner
We provide the first mathematically complete derivation of the Nyström method for low-rank approximation of indefinite kernels and propose an efficient method for finding an approximate eigendecomposition of such kernel matrices.
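A minimal sketch of the standard Nyström construction applied to an indefinite kernel matrix (not the paper's specific derivation): pick m landmark columns, eigendecompose the small landmark block, and lift to an approximate eigendecomposition of the full matrix, K ≈ C W⁺ Cᵀ:

```python
import numpy as np

def nystrom_eig(K, landmarks):
    """Approximate eigendecomposition of a (possibly indefinite) symmetric
    kernel matrix K from the m-landmark Nystroem approximation K ~ C W^+ C^T."""
    C = K[:, landmarks]                  # n x m column block
    W = K[np.ix_(landmarks, landmarks)]  # m x m landmark block, maybe indefinite
    evals, evecs = np.linalg.eigh(W)     # eigh handles indefinite W as well
    keep = np.abs(evals) > 1e-10         # drop (near-)null directions
    evals, evecs = evals[keep], evecs[:, keep]
    # approximate eigenvectors of K (not orthonormal in general)
    U = C @ evecs / evals                # divides each column by its eigenvalue
    return evals, U

# usage on a small indefinite "kernel": a difference of two Gaussian kernels
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq) - 0.5 * np.exp(-sq / 4.0)  # indefinite in general
lm = rng.choice(40, size=15, replace=False)
evals, U = nystrom_eig(K, lm)
K_approx = U @ np.diag(evals) @ U.T        # equals C W^+ C^T
```

Because eigh is applied only to the m×m landmark block, the cost is O(m³ + nm²) rather than O(n³), and negative eigenvalues of W are retained rather than clipped.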
no code implementations • ICML 2017 • Dino Oglic, Thomas Gärtner
We investigate, theoretically and empirically, the effectiveness of kernel K-means++ samples as landmarks in the Nyström method for low-rank approximation of kernel matrices.
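The landmark-sampling step can be sketched as follows: kernel K-means++ draws each new landmark with probability proportional to its squared feature-space distance to the nearest landmark chosen so far, where distances are computed through the kernel trick, d²(i, j) = K_ii + K_jj − 2K_ij (a minimal sketch; the usage kernel below is illustrative):

```python
import numpy as np

def kernel_kmeanspp(K, m, rng):
    """Sample m landmark indices by kernel K-means++ seeding."""
    n = K.shape[0]
    diag = np.diag(K)
    idx = [rng.integers(n)]  # first landmark uniformly at random
    # squared feature-space distances to the closest landmark so far
    d2 = diag + diag[idx[0]] - 2 * K[:, idx[0]]
    for _ in range(m - 1):
        d2 = np.maximum(d2, 0)          # guard tiny negative round-off
        probs = d2 / d2.sum()
        c = rng.choice(n, p=probs)      # D^2-weighted draw
        idx.append(c)
        d2 = np.minimum(d2, diag + diag[c] - 2 * K[:, c])
    return np.array(idx)

# usage: Gaussian kernel matrix on random 2-D points
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq)
landmarks = kernel_kmeanspp(K, 5, rng)  # indices to feed into Nystroem
```

Already-chosen points have distance exactly zero and so are never redrawn; the selected indices then serve as the landmark columns of the Nyström approximation.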
no code implementations • NeurIPS 2016 • Dino Oglic, Thomas Gärtner
We present an effective method for supervised feature construction.
no code implementations • NeurIPS 2011 • Olana Missura, Thomas Gärtner
Motivated by applications in electronic games as well as teaching systems, we investigate the problem of dynamic difficulty adjustment.