no code implementations • 29 Nov 2022 • Hamish Flynn, David Reeb, Melih Kandemir, Jan Peters
In many such applications, principled algorithms with strong performance guarantees are highly desirable.
no code implementations • 21 Oct 2022 • David Reeb, Kanil Patel, Karim Barsim, Martin Schiegg, Sebastian Gerwinn
Assessing the validity of a real-world system with respect to given quality criteria is a common yet costly task in industrial applications due to the vast number of required real-world tests.
1 code implementation • 23 Jun 2022 • Manuel Nonnenmacher, Lukas Oldenburg, Ingo Steinwart, David Reeb
We therefore devise ExpCLR, a novel contrastive learning approach built on an objective that utilizes expert features to encourage both properties in the learned representation.
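For intuition, here is a minimal sketch of a contrastive objective in which expert-feature similarity determines which pairs are pulled together in the embedding space. This is an illustrative stand-in written in PyTorch, not the actual ExpCLR objective; the function name and the weighting scheme are assumptions.

```python
import torch
import torch.nn.functional as F

def expert_weighted_contrastive_loss(z, expert_feats, temperature=0.1):
    """Generic sketch: embeddings z of samples whose expert features are
    similar are encouraged to be close. NOT the ExpCLR objective."""
    z = F.normalize(z, dim=1)                    # (N, d) learned embeddings
    e = F.normalize(expert_feats, dim=1)         # (N, k) expert features
    sim_z = z @ z.t() / temperature              # embedding similarities
    sim_e = (e @ e.t()).clamp(min=0.0)           # target similarities from expert features
    mask = ~torch.eye(len(z), dtype=torch.bool)  # exclude self-pairs
    log_p = F.log_softmax(sim_z.masked_fill(~mask, float('-inf')), dim=1)
    w = sim_e.masked_fill(~mask, 0.0)
    w = w / w.sum(dim=1, keepdim=True).clamp(min=1e-8)
    return -(w * log_p.masked_fill(~mask, 0.0)).sum(dim=1).mean()
```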
no code implementations • 7 Mar 2022 • Hamish Flynn, David Reeb, Melih Kandemir, Jan Peters
We present a PAC-Bayesian analysis of lifelong learning.
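For context, a standard (McAllester/Maurer-style) PAC-Bayesian bound of the kind such analyses build on; this is the classical single-task bound, not the paper's lifelong-learning result. With probability at least 1-δ over an i.i.d. sample of size n, simultaneously for all posteriors Q over hypotheses, given a data-independent prior P,

\[
\mathbb{E}_{h\sim Q}\!\left[L(h)\right] \;\le\; \mathbb{E}_{h\sim Q}\!\left[\hat{L}(h)\right] \;+\; \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln(2\sqrt{n}/\delta)}{2n}},
\]

where L is the true risk and \hat{L} the empirical risk on the sample.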
1 code implementation • NeurIPS 2021 • Manuel Nonnenmacher, Thomas Pfeil, Ingo Steinwart, David Reeb
We validate SOSP-H by comparing it to our second method, SOSP-I, which uses a well-established Hessian approximation, and to numerous state-of-the-art methods.
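As background, second-order pruning scores of this kind can be computed without materializing the Hessian, using Hessian-vector products (Pearlmutter's double-backprop trick). A minimal PyTorch sketch of that primitive, as an illustration only and not the SOSP implementation:

```python
import torch

def hvp(loss, params, vec):
    """Hessian-vector product H @ vec via double backprop, without ever
    forming the Hessian explicitly (Pearlmutter's trick)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    gv = (flat * vec).sum()          # scalar g^T vec, still differentiable
    hv = torch.autograd.grad(gv, params)
    return torch.cat([h.reshape(-1) for h in hv])

# Example: H @ v for a tiny model on random data.
model = torch.nn.Linear(3, 1)
x, y = torch.randn(8, 3), torch.randn(8, 1)
loss = ((model(x) - y) ** 2).mean()
params = list(model.parameters())
v = torch.randn(sum(p.numel() for p in params))
print(hvp(loss, params, v))
```

A structure's saliency can then be scored from quantities like e^T H e, where e is the direction that zeroes that structure; the exact SOSP criteria differ from this generic sketch.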
no code implementations • 4 Nov 2020 • Manuel Nonnenmacher, David Reeb, Ingo Steinwart
The loss surface of an overparameterized neural network (NN) possesses many global minima of zero training error.
1 code implementation • NeurIPS 2020 • Jakob Lindinger, David Reeb, Christoph Lippert, Barbara Rakitsch
Deep Gaussian Processes learn probabilistic data representations for supervised learning by cascading multiple Gaussian Processes.
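To make the cascading concrete, here is a minimal NumPy sketch that draws a sample from a two-layer deep GP prior by feeding the output of one GP sample into the next. This illustrates the model class only, not the paper's inference method; the RBF kernel and its hyperparameters are assumptions.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel on 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp_layer(x, rng, jitter=1e-8):
    """One function sample from a zero-mean GP prior, evaluated at x."""
    K = rbf_kernel(x, x) + jitter * np.eye(len(x))
    return rng.multivariate_normal(np.zeros(len(x)), K)

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 100)
h = sample_gp_layer(x, rng)  # layer 1: x -> h
y = sample_gp_layer(h, rng)  # layer 2: h -> y, the cascade
```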
no code implementations • 25 Sep 2019 • Manuel Nonnenmacher, David Reeb, Ingo Steinwart
The recently developed link between strongly overparameterized neural networks (NNs) and kernel methods has opened a new way to understand puzzling features of NNs, such as their convergence and generalization behaviors.
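That link rests on the (empirical) neural tangent kernel: the inner product of the parameter gradients of the network output at two inputs, which governs training dynamics in the strongly overparameterized regime. A minimal PyTorch sketch of one empirical NTK entry, as an illustration (the architecture and input sizes are arbitrary):

```python
import torch

def empirical_ntk_entry(model, x1, x2):
    """Theta(x1, x2) = <df(x1)/dtheta, df(x2)/dtheta> for scalar output f."""
    def grad_vec(x):
        out = model(x).sum()
        g = torch.autograd.grad(out, list(model.parameters()))
        return torch.cat([gi.reshape(-1) for gi in g])
    return grad_vec(x1) @ grad_vec(x2)

model = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
x1, x2 = torch.randn(1, 2), torch.randn(1, 2)
print(empirical_ntk_entry(model, x1, x2))  # scalar kernel value
```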
1 code implementation • NeurIPS 2018 • David Reeb, Andreas Doerr, Sebastian Gerwinn, Barbara Rakitsch
Gaussian Processes (GPs) are a generic modelling tool for supervised learning.
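For reference, the textbook baseline that such work builds on: exact GP regression with the standard Cholesky-based posterior equations (Rasmussen & Williams, Alg. 2.1), in a minimal NumPy sketch with an assumed RBF kernel and noise level.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)

def gp_predict(x_train, y_train, x_test, noise=0.1):
    """Posterior mean and variance of exact GP regression (zero prior mean)."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    K_s = rbf(x_train, x_test)            # train-test covariances
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    v = np.linalg.solve(L, K_s)
    mean = K_s.T @ alpha                  # posterior mean at x_test
    var = np.ones(len(x_test)) - np.sum(v**2, axis=0)  # diag of posterior cov
    return mean, var
```

Since this kernel has unit variance, the prior variance at each test point is 1, which is why the posterior variance starts from a vector of ones.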