no code implementations • 23 Dec 2023 • Abhishake, Tapio Helin, Nicole Mücke
To achieve these results, the structure of reproducing kernel Hilbert spaces is leveraged to establish minimax rates in the statistical learning setting.
no code implementations • 14 Sep 2023 • Mike Nguyen, Nicole Mücke
We analyze the generalization properties of two-layer neural networks in the neural tangent kernel (NTK) regime, trained with gradient descent (GD).
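As a rough illustration of this setting, here is a minimal sketch (not the authors' code) of a two-layer ReLU network in the standard NTK parameterization, trained with full-batch gradient descent on synthetic data. The width, step size, and the simplification of training only the first layer with a fixed output layer are illustrative assumptions.

```python
# Hedged sketch: two-layer network f(x) = (1/sqrt(m)) * a^T relu(W x) in the NTK
# parameterization, trained with full-batch gradient descent on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 50, 5, 2000                       # samples, input dim, width (large m ~ NTK regime)
X = rng.normal(size=(n, d)) / np.sqrt(d)    # inputs with roughly unit norm
y = np.sin(X @ rng.normal(size=d))          # synthetic regression targets

W = rng.normal(size=(m, d))                 # first layer at NTK initialization scale
a = rng.choice([-1.0, 1.0], size=m)         # fixed output layer (common simplification)
lr = 0.5

def predict(W):
    # ReLU features with the 1/sqrt(m) NTK scaling
    return np.maximum(X @ W.T, 0.0) @ a / np.sqrt(m)

for step in range(500):                     # full-batch GD on the averaged squared loss
    r = predict(W) - y                      # residuals, shape (n,)
    act = (X @ W.T > 0).astype(float)       # ReLU derivative, shape (n, m)
    grad_W = ((act * a / np.sqrt(m)).T * r) @ X / n   # chain rule, shape (m, d)
    W -= lr * grad_W

print("train MSE:", np.mean((predict(W) - y) ** 2))
```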
no code implementations • 29 Aug 2023 • Mike Nguyen, Nicole Mücke
Random feature approximation is arguably one of the most popular techniques for speeding up kernel methods in large-scale algorithms, and it provides a theoretical approach to the analysis of deep neural networks.
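A minimal sketch of the idea, assuming the classical random Fourier features construction of Rahimi and Recht for the Gaussian kernel; all sizes and names are illustrative.

```python
# Hedged sketch: random Fourier features approximating the Gaussian kernel
# k(x, y) = exp(-||x - y||^2 / 2) with unit bandwidth.
import numpy as np

rng = np.random.default_rng(0)
d, D = 10, 2000                              # input dim, number of random features

Omega = rng.normal(size=(D, d))              # frequencies ~ N(0, I)
b = rng.uniform(0.0, 2.0 * np.pi, size=D)    # random phases

def features(X):
    # feature map z with z(x)^T z(y) ~= k(x, y)
    return np.sqrt(2.0 / D) * np.cos(X @ Omega.T + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.sum((x - y) ** 2) / 2.0)
approx = features(x[None, :]) @ features(y[None, :]).T
print(f"exact {exact:.4f}  approx {approx[0, 0]:.4f}")
```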
no code implementations • 16 Nov 2022 • Mattes Mollenhauer, Nicole Mücke, T. J. Sullivan
However, we prove that, in terms of spectral properties and regularisation theory, this inverse problem is equivalent to the known compact inverse problem associated with scalar response regression.
no code implementations • 20 Oct 2022 • Mike Nguyen, Charly Kirst, Nicole Mücke
We consider distributed learning using constant stepsize SGD (DSGD) over several devices, each sending a final model update to a central server.
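A minimal sketch of this setup for linear least squares (not the authors' code): each device runs one pass of constant-stepsize SGD on its local split, and the server averages the final iterates ("one-shot" averaging). Dimensions and learning rate are illustrative.

```python
# Hedged sketch: distributed constant-stepsize SGD with one-shot model averaging.
import numpy as np

rng = np.random.default_rng(0)
n, d, M, lr = 4000, 20, 8, 0.05              # sample size, dim, devices, stepsize
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)

def local_sgd(Xl, yl):
    w = np.zeros(d)
    for i in range(len(yl)):                 # one pass, constant stepsize
        w -= lr * (Xl[i] @ w - yl[i]) * Xl[i]   # stochastic gradient of squared loss
    return w

# split the sample across devices; the server averages the final models
w_avg = np.mean([local_sgd(Xl, yl) for Xl, yl in
                 zip(np.array_split(X, M), np.array_split(y, M))], axis=0)
print("estimation error:", np.linalg.norm(w_avg - w_star))
```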
no code implementations • 21 Oct 2021 • Nicole Mücke, Enrico Reiss, Jonas Rungenhagen, Markus Klein
While large training datasets generally improve model performance, the training process becomes computationally expensive and time-consuming.
no code implementations • 9 Jun 2021 • Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco
Optimization in machine learning typically deals with the minimization of empirical objectives defined by training data.
no code implementations • 24 Oct 2020 • Nicole Mücke
Stochastic gradient descent (SGD) provides a simple and efficient way to solve a broad range of machine learning problems.
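For concreteness, a minimal single-pass SGD sketch for regularized least squares on synthetic data; the step size and penalty are illustrative assumptions, not choices from the paper.

```python
# Hedged sketch: vanilla SGD for ridge-penalized least squares, one example per step.
import numpy as np

rng = np.random.default_rng(0)
n, d, lr, lam = 1000, 10, 0.05, 1e-3
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)

w = np.zeros(d)
for t in range(n):                            # single pass over the data
    i = rng.integers(n)                       # sample an index uniformly at random
    grad = (X[i] @ w - y[i]) * X[i] + lam * w # stochastic gradient of the objective
    w -= lr * grad
print("estimation error:", np.linalg.norm(w - w_star))
```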
no code implementations • 18 Jun 2020 • Nicole Mücke, Enrico Reiss
Stochastic Gradient Descent (SGD) has become the method of choice for solving a broad range of machine learning problems.
no code implementations • 27 May 2019 • Ernesto De Vito, Nicole Mücke, Lorenzo Rosasco
We study reproducing kernel Hilbert spaces (RKHS) on a Riemannian manifold.
no code implementations • 25 May 2019 • Nicole Mücke, Ingo Steinwart
Moreover, we show that the same phenomenon occurs for DNNs with zero training error and sufficiently large architectures.
no code implementations • NeurIPS 2019 • Nicole Mücke, Gergely Neu, Lorenzo Rosasco
While stochastic gradient descent (SGD) is one of the major workhorses in machine learning, the learning properties of many practically used variants are poorly understood.
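Two such variants, minibatching and tail-averaging (averaging only the later iterates), can be sketched as follows on a synthetic least-squares problem; this illustrates the variants themselves, not the paper's experiments.

```python
# Hedged sketch: minibatch SGD with tail-averaging of the second half of the iterates.
import numpy as np

rng = np.random.default_rng(0)
n, d, batch, lr, T = 2000, 10, 16, 0.1, 500
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)

w, tail = np.zeros(d), []
for t in range(T):
    idx = rng.integers(n, size=batch)                 # draw a minibatch
    grad = (X[idx] @ w - y[idx]) @ X[idx] / batch     # minibatch gradient
    w -= lr * grad
    if t >= T // 2:                                   # keep only the tail of the path
        tail.append(w.copy())

w_tail = np.mean(tail, axis=0)                        # tail-averaged iterate
print("last iterate:", np.linalg.norm(w - w_star))
print("tail average:", np.linalg.norm(w_tail - w_star))
```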
no code implementations • 15 Apr 2018 • Nicole Mücke
We address the problem of adaptivity in the framework of reproducing kernel Hilbert space (RKHS) regression.
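One standard route to adaptivity is a data-driven choice of the regularization parameter. Below is a hold-out selection sketch for Gaussian-kernel ridge regression; it illustrates the problem being addressed, not necessarily the adaptive method studied in the paper.

```python
# Hedged sketch: hold-out choice of the regularization parameter for kernel
# ridge regression with a Gaussian kernel; all sizes and the grid are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=n)

def gram(A, B, gamma=10.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

Xtr, Xval, ytr, yval = X[:150], X[150:], y[:150], y[150:]
K = gram(Xtr, Xtr)

best = None
for lam in 10.0 ** np.arange(-6, 1):          # grid over regularization strengths
    alpha = np.linalg.solve(K + lam * len(ytr) * np.eye(len(ytr)), ytr)
    err = np.mean((gram(Xval, Xtr) @ alpha - yval) ** 2)
    if best is None or err < best[0]:
        best = (err, lam)
print("selected lambda:", best[1], "validation MSE:", best[0])
```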
no code implementations • 12 Nov 2016 • Gilles Blanchard, Nicole Mücke
These questions have been considered in past literature, but only under specific assumptions about the decay, typically polynomial, of the spectrum of the kernel mapping covariance operator.
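Concretely, such a polynomial-decay assumption on the eigenvalues $\lambda_i$ of the covariance operator is usually stated as follows (constants and exponent conventions vary across papers):

```latex
% Polynomial eigenvalue decay, \lambda_i \asymp i^{-b}:
\[
  c\, i^{-b} \;\le\; \lambda_i \;\le\; C\, i^{-b},
  \qquad i \ge 1, \quad b > 1, \quad 0 < c \le C.
\]
```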
no code implementations • 24 Oct 2016 • Gilles Blanchard, Nicole Mücke
We consider a distributed learning approach in supervised learning for a large class of spectral regularization methods in an RKHS framework.
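A minimal sketch of the distributed approach, with Tikhonov regularization (kernel ridge regression) standing in for the general spectral filter: each of M machines fits on its local subsample, and the server averages the local predictors. Kernel, bandwidth, and sizes are illustrative assumptions.

```python
# Hedged sketch: divide-and-conquer kernel ridge regression with averaged predictors.
import numpy as np

rng = np.random.default_rng(0)
n, M, lam = 600, 4, 1e-3
X = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=n)

def gram(A, B, gamma=10.0):
    return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def fit_local(Xl, yl):
    # Tikhonov filter on the local Gram matrix
    alpha = np.linalg.solve(gram(Xl, Xl) + lam * len(yl) * np.eye(len(yl)), yl)
    return lambda Xnew: gram(Xnew, Xl) @ alpha

models = [fit_local(Xl, yl) for Xl, yl in
          zip(np.array_split(X, M), np.array_split(y, M))]

Xtest = np.linspace(-1, 1, 100)[:, None]
pred = np.mean([f(Xtest) for f in models], axis=0)    # server-side averaging
print("test MSE:", np.mean((pred - np.sin(3 * Xtest[:, 0])) ** 2))
```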
no code implementations • 14 Apr 2016 • Gilles Blanchard, Nicole Mücke
We consider a statistical inverse learning problem, where we observe the image of a function $f$ through a linear operator $A$ at i.i.d. random design points.
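A minimal discretized sketch of this observation model, assuming a smoothing convolution-type operator $A$ on a grid and Tikhonov regularization for the reconstruction; purely illustrative, not the paper's setting in full generality.

```python
# Hedged sketch: observe y_i = (A f)(x_i) + noise at i.i.d. design points,
# with A a local-averaging forward operator; recover f by Tikhonov regularization.
import numpy as np

rng = np.random.default_rng(0)
p, n, lam = 100, 400, 1e-2                     # grid size, sample size, regularization
grid = np.linspace(0, 1, p)
f_true = np.sin(2 * np.pi * grid)

# A: Gaussian local averaging (a mildly ill-posed smoothing operator)
A = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / (2 * 0.05 ** 2))
A /= A.sum(axis=1, keepdims=True)

idx = rng.integers(p, size=n)                  # i.i.d. random design points
y = (A @ f_true)[idx] + 0.05 * rng.normal(size=n)

S = A[idx]                                     # sampled rows of the forward map
f_hat = np.linalg.solve(S.T @ S / n + lam * np.eye(p), S.T @ y / n)
print("reconstruction error:", np.linalg.norm(f_hat - f_true) / np.sqrt(p))
```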