Search Results for author: Thomas Gaertner

Found 4 papers, 1 paper with code

Improving Expert Specialization in Mixture of Experts

no code implementations 28 Feb 2023 Yamuna Krishnamurthy, Chris Watkins, Thomas Gaertner

Mixture of experts (MoE), introduced over 20 years ago, is the simplest gated modular neural network architecture.

Continual Learning
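
The abstract above refers to the classic soft-gated MoE formulation: a gating network produces a distribution over experts and the output is the gate-weighted combination of the expert outputs. A minimal PyTorch sketch of that idea (layer sizes and expert count are illustrative assumptions, not taken from the paper):

import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfExperts(nn.Module):
    """Soft-gated mixture of experts: output = sum_e gate_e(x) * expert_e(x).
    Sizes below are illustrative, not taken from the paper."""
    def __init__(self, in_dim, out_dim, num_experts=4, hidden=32):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))
             for _ in range(num_experts)]
        )
        self.gate = nn.Linear(in_dim, num_experts)  # gating network

    def forward(self, x):
        gates = F.softmax(self.gate(x), dim=-1)                        # (batch, num_experts)
        outs = torch.stack([expert(x) for expert in self.experts], 1)  # (batch, num_experts, out_dim)
        return (gates.unsqueeze(-1) * outs).sum(dim=1)                 # gate-weighted combination

# Usage: one forward pass on random data
moe = MixtureOfExperts(in_dim=8, out_dim=3)
print(moe(torch.randn(5, 8)).shape)  # torch.Size([5, 3])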

Active Learning of Convex Halfspaces on Graphs

no code implementations NeurIPS 2021 Maximilian Thiessen, Thomas Gaertner

We systematically study the query complexity of learning geodesically convex halfspaces on graphs.

Active Learning
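
The central notion in the abstract above has a concrete combinatorial meaning: a vertex set is geodesically convex if it contains every vertex on every shortest path between two of its members, and a halfspace is a convex set whose complement is also convex. A small illustrative check using networkx (the graph and the candidate sets are made-up examples, not from the paper):

import networkx as nx

def is_geodesically_convex(G, S):
    """S is geodesically convex if every vertex on every shortest path
    between two members of S also lies in S."""
    nodes = list(S)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            for path in nx.all_shortest_paths(G, u, v):
                if not set(path) <= S:
                    return False
    return True

def is_halfspace(G, S):
    """A halfspace is a convex set whose complement is also convex."""
    complement = set(G.nodes) - S
    return is_geodesically_convex(G, S) and is_geodesically_convex(G, complement)

# Toy example: on the path graph 0-1-2-3-4, any prefix {0, ..., k} is a halfspace.
G = nx.path_graph(5)
print(is_halfspace(G, {0, 1, 2}))  # True
print(is_halfspace(G, {0, 2}))     # False: vertex 1 lies on the shortest 0-2 path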

Learning in Reproducing Kernel Kreĭn Spaces

no code implementations ICML 2018 Dino Oglic, Thomas Gaertner

We formulate a novel regularized risk minimization problem for learning in reproducing kernel Kreĭn spaces and show that the strong representer theorem applies to it.
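
For orientation, a generic regularized risk minimization over a kernel space, together with the finite expansion that a representer theorem guarantees, can be sketched as follows; this is the standard form, not the paper's exact Kreĭn-space formulation, where the pairing ⟨f, f⟩ is indefinite and the analysis is correspondingly more delicate.

\min_{f \in \mathcal{H}_k} \; \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(y_i, f(x_i)\bigr) + \lambda \, \langle f, f \rangle_{\mathcal{H}_k}

% a representer theorem states that a solution admits the finite kernel expansion
f^*(\cdot) = \sum_{i=1}^{n} \alpha_i \, k(\cdot, x_i), \qquad \alpha \in \mathbb{R}^n .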
