Search Results for author: Thomas Gärtner

Found 9 papers, 2 papers with code

Expectation-Complete Graph Representations with Homomorphisms

2 code implementations • 9 Jun 2023 • Pascal Welke, Maximilian Thiessen, Fabian Jogl, Thomas Gärtner

We investigate novel random graph embeddings that can be computed in expected polynomial time and that are able to distinguish all non-isomorphic graphs in expectation.

Graph Learning
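As a toy illustration of the idea behind homomorphism-based graph representations — counting homomorphisms from small pattern graphs by brute force, not the paper's randomised expected-polynomial-time embedding — counts of just three tiny patterns already distinguish a triangle from a path:

```python
from itertools import product

def hom_count(pattern_edges, pattern_n, adj):
    # Count homomorphisms: maps phi from pattern vertices {0..pattern_n-1}
    # to graph vertices such that every pattern edge lands on a graph edge.
    n = len(adj)
    return sum(
        all(phi[v] in adj[phi[u]] for u, v in pattern_edges)
        for phi in product(range(n), repeat=pattern_n)
    )

# Two target graphs as adjacency sets (undirected).
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path3    = {0: {1}, 1: {0, 2}, 2: {1}}

# Embedding: homomorphism counts of a single edge, a 2-edge path,
# and a triangle pattern.
patterns = [
    ([(0, 1)], 2),
    ([(0, 1), (1, 2)], 3),
    ([(0, 1), (1, 2), (2, 0)], 3),
]
emb_tri  = [hom_count(e, k, triangle) for e, k in patterns]
emb_path = [hom_count(e, k, path3) for e, k in patterns]
# emb_tri  -> [6, 12, 6]
# emb_path -> [4, 6, 0]   (no odd cycle maps into a bipartite graph)
```

The two vectors differ, so the embedding separates the graphs; the paper's contribution is choosing random patterns so that, in expectation, all non-isomorphic graphs are separated this way.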

Disentangled Representations using Trained Models

no code implementations • 29 Sep 2021 • Eva Smit, Thomas Gärtner, Patrick Forré

Using the implicit function theorem, we show how a diverse set of models already trained on the data can be used to select pairs of data points that share a common value of the interpretable factors.

Corresponding Projections for Orphan Screening

2 code implementations • 30 Nov 2018 • Sven Giesselbach, Katrin Ullrich, Michael Kamp, Daniel Paurat, Thomas Gärtner

We propose a novel transfer learning approach for orphan screening called corresponding projections.

Drug Discovery • Transfer Learning

Effective Parallelisation for Machine Learning

no code implementations • NeurIPS 2017 • Michael Kamp, Mario Boley, Olana Missura, Thomas Gärtner

We present a novel parallelisation scheme that simplifies the adaptation of learning algorithms to growing amounts of data as well as growing needs for accurate and confident predictions in critical applications.

BIG-bench Machine Learning • Open-Ended Question Answering

Scalable Learning in Reproducing Kernel Krein Spaces

no code implementations • 6 Sep 2018 • Dino Oglic, Thomas Gärtner

We provide the first mathematically complete derivation of the Nyström method for low-rank approximation of indefinite kernels and propose an efficient method for finding an approximate eigendecomposition of such kernel matrices.

Time Series • Time Series Analysis

Nyström Method with Kernel K-means++ Samples as Landmarks

no code implementations • ICML 2017 • Dino Oglic, Thomas Gärtner

We investigate, theoretically and empirically, the effectiveness of kernel K-means++ samples as landmarks in the Nyström method for low-rank approximation of kernel matrices.

Clustering
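A minimal sketch of the general recipe, not the paper's method: the Nyström approximation K ≈ C W⁺ Cᵀ with landmarks chosen by K-means++-style seeding. Here the seeding uses input-space distances as a stand-in for the paper's kernel K-means++ sampling, and the RBF kernel, data shape, and rank are illustrative choices:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Pairwise Gaussian/RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kmeanspp_landmarks(X, m, rng):
    # K-means++-style seeding: first landmark uniform at random, each
    # subsequent one sampled with probability proportional to its squared
    # distance from the landmarks chosen so far.
    idx = [int(rng.integers(len(X)))]
    for _ in range(m - 1):
        d2 = ((X[:, None, :] - X[idx][None, :, :]) ** 2).sum(-1).min(axis=1)
        idx.append(int(rng.choice(len(X), p=d2 / d2.sum())))
    return np.array(idx)

def nystrom(X, m, gamma=0.5, seed=0):
    # Rank-m Nystrom approximation K ~= C W^+ C^T, where C holds the kernel
    # columns at the m landmarks and W is the landmark-landmark block.
    rng = np.random.default_rng(seed)
    idx = kmeanspp_landmarks(X, m, rng)
    C = rbf_kernel(X, X[idx], gamma)   # n x m
    W = C[idx]                         # m x m
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
K = rbf_kernel(X, X)
K_hat = nystrom(X, m=50)
rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

The intuition the paper studies is that spread-out landmarks (as K-means++ seeding produces) cover the data better than uniform sampling, which tightens the low-rank approximation error.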

Greedy Feature Construction

no code implementations • NeurIPS 2016 • Dino Oglic, Thomas Gärtner

We present an effective method for supervised feature construction.

Regression

Predicting Dynamic Difficulty

no code implementations • NeurIPS 2011 • Olana Missura, Thomas Gärtner

Motivated by applications in electronic games as well as teaching systems, we investigate the problem of dynamic difficulty adjustment.
