no code implementations • 28 Feb 2023 • Yamuna Krishnamurthy, Chris Watkins, Thomas Gaertner
Mixture of experts (MoE), introduced over 20 years ago, is the simplest gated modular neural network architecture.
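The gated modular architecture mentioned above can be sketched in a few lines: a softmax gate softly assigns each input to experts, and the output is the gate-weighted combination of the expert outputs. This is a minimal illustrative sketch, not the paper's implementation; all dimensions and the linear experts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts, d_in, d_out = 3, 4, 2

# Each expert is a simple linear map; the gate is a linear map + softmax.
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d_in, n_experts))

def moe_forward(x):
    # Gate: soft assignment of the input to the experts (softmax over logits).
    logits = x @ gate_w
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    # Output: gate-weighted combination of the expert outputs.
    outputs = np.stack([x @ W for W in experts])  # shape (n_experts, d_out)
    return weights @ outputs

x = rng.normal(size=d_in)
y = moe_forward(x)
print(y.shape)  # (2,)
```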
no code implementations • NeurIPS 2021 • Maximilian Thiessen, Thomas Gaertner
We systematically study the query complexity of learning geodesically convex halfspaces on graphs.
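A vertex set S is geodesically convex when every vertex on every shortest path between two members of S also belongs to S. The brute-force membership check below illustrates the notion on a small unweighted connected graph; it is an illustrative sketch, not the paper's query-learning algorithm.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    # Unweighted shortest-path distances from src via breadth-first search.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_geodesically_convex(adj, S):
    # Assumes a connected graph given as an adjacency dict.
    dist = {v: bfs_dist(adj, v) for v in adj}
    for a, b in combinations(S, 2):
        for v in adj:
            # v lies on some shortest a-b path iff d(a,v) + d(v,b) = d(a,b).
            if dist[a][v] + dist[v][b] == dist[a][b] and v not in S:
                return False
    return True

# Path graph 0-1-2-3: {0, 2} is not convex (1 lies between them), {1, 2} is.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_geodesically_convex(path, {0, 2}))  # False
print(is_geodesically_convex(path, {1, 2}))  # True
```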
1 code implementation • NeurIPS 2020 • Aiham Taleb, Winfried Loetzsch, Noel Danz, Julius Severin, Thomas Gaertner, Benjamin Bergner, Christoph Lippert
Self-supervised learning methods have witnessed a recent surge of interest after proving successful in multiple application fields.
no code implementations • ICML 2018 • Dino Oglic, Thomas Gaertner
We formulate a novel regularized risk minimization problem for learning in reproducing kernel Kreĭn spaces and show that the strong representer theorem applies to it.
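Schematically, a regularized risk minimization problem of this kind, together with the form of minimizer a representer theorem guarantees, looks as follows (the notation is illustrative and not taken from the paper; $\ell$ is a loss, $\lambda > 0$ a regularization weight, $\Omega$ a regularizer, and $k$ the kernel inducing the space $\mathcal{H}$):

```latex
\min_{f \in \mathcal{H}} \; \sum_{i=1}^{n} \ell\bigl(y_i, f(x_i)\bigr) + \lambda\, \Omega(f),
\qquad
f^{*}(\cdot) = \sum_{i=1}^{n} c_i\, k(x_i, \cdot), \quad c_i \in \mathbb{R}.
```

A strong representer theorem asserts that every minimizer admits the finite kernel expansion on the right, so the search over the function space reduces to a search over the coefficients $c_1, \dots, c_n$.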