Search Results for author: Nicole Mücke

Found 16 papers, 0 papers with code

Statistical inverse learning problems with random observations

no code implementations • 23 Dec 2023 • Abhishake, Tapio Helin, Nicole Mücke

To achieve these results, the structure of reproducing kernel Hilbert spaces is leveraged to establish minimax rates in the statistical learning setting.

Experimental Design

How many Neurons do we need? A refined Analysis for Shallow Networks trained with Gradient Descent

no code implementations • 14 Sep 2023 • Mike Nguyen, Nicole Mücke

We analyze the generalization properties of two-layer neural networks in the neural tangent kernel (NTK) regime, trained with gradient descent (GD).

regression
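
For intuition, a minimal numpy sketch of this setting (not the paper's code; the width, target, and step size below are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
n, d, m = 100, 5, 2048                    # samples, input dim, width; large m ~ NTK regime
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = np.sin(X @ rng.standard_normal(d))    # hypothetical smooth target

W = rng.standard_normal((m, d))           # first layer, trained below
a = rng.choice([-1.0, 1.0], size=m)       # second layer fixed at random signs

def f(X):
    # NTK scaling: output normalized by sqrt(width)
    return (np.maximum(X @ W.T, 0.0) @ a) / np.sqrt(m)

lr = 1.0
for _ in range(500):                      # full-batch gradient descent
    r = f(X) - y                          # residuals
    act = (X @ W.T > 0).astype(float)     # ReLU derivative
    W -= lr * ((act * (r[:, None] * a[None, :])).T @ X) / (n * np.sqrt(m))

In the NTK regime (large m), the weights stay close to their initialization and GD behaves like kernel regression with the neural tangent kernel.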

Random feature approximation for general spectral methods

no code implementations • 29 Aug 2023 • Mike Nguyen, Nicole Mücke

Random feature approximation is arguably one of the most popular techniques to speed up kernel methods in large-scale algorithms and provides a theoretical approach to the analysis of deep neural networks.
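
For illustration, a minimal numpy sketch of the best-known instance, random Fourier features for the Gaussian kernel (the sizes and the ridge solver are assumptions, not the paper's setup):

import numpy as np

rng = np.random.default_rng(0)
n, d, D = 200, 3, 500                  # samples, input dim, number of features
gamma, lam = 1.0, 1e-3                 # kernel width and ridge parameter (assumed)
X = rng.standard_normal((n, d))
y = np.cos(X[:, 0])                    # toy target

# Features whose inner products approximate k(x, z) = exp(-gamma ||x - z||^2 / 2)
W = rng.standard_normal((D, d)) * np.sqrt(gamma)
b = rng.uniform(0.0, 2.0 * np.pi, D)

def phi(X):
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

Z = phi(X)                             # ridge regression in feature space:
alpha = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)   # O(n D^2) vs. O(n^3)
y_hat = Z @ alpha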

Learning linear operators: Infinite-dimensional regression as a well-behaved non-compact inverse problem

no code implementations • 16 Nov 2022 • Mattes Mollenhauer, Nicole Mücke, T. J. Sullivan

However, we prove that, in terms of spectral properties and regularisation theory, this inverse problem is equivalent to the known compact inverse problem associated with scalar response regression.

regression
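
For reference, the setup in standard notation (the symbols below are chosen here for illustration, not taken from the paper):

\[
Y = A_\ast X + \varepsilon, \qquad A_\ast \colon H_1 \to H_2 \ \text{linear},
\]

and minimizing the population risk $\mathbb{E}\,\|Y - AX\|_{H_2}^2$ yields the normal equation $A\,C_{XX} = C_{YX}$, with $C_{XX} = \mathbb{E}[X \otimes X]$ and $C_{YX} = \mathbb{E}[Y \otimes X]$; recovering $A_\ast$ from these covariances is the non-compact inverse problem of the title.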

Local SGD in Overparameterized Linear Regression

no code implementations • 20 Oct 2022 • Mike Nguyen, Charly Kirst, Nicole Mücke

We consider distributed learning using constant stepsize SGD (DSGD) over several devices, each sending a final model update to a central server.

regression
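
A minimal sketch of the scheme described above, for a linear model (all sizes, the stepsize, and the epoch count are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
n, d, workers = 1000, 10, 5
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

lr, epochs = 0.01, 5                       # constant stepsize, as in the abstract
shards = np.array_split(rng.permutation(n), workers)

def local_sgd(idx):
    w = np.zeros(d)                        # every device starts from the same point
    for _ in range(epochs):
        for i in rng.permutation(idx):
            w -= lr * (X[i] @ w - y[i]) * X[i]   # squared-loss SGD step
    return w

# one-shot communication: each device sends only its final iterate
w_avg = np.mean([local_sgd(idx) for idx in shards], axis=0)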

Data splitting improves statistical performance in overparametrized regimes

no code implementations • 21 Oct 2021 • Nicole Mücke, Enrico Reiss, Jonas Rungenhagen, Markus Klein

While large training datasets generally improve model performance, the training process becomes computationally expensive and time-consuming.

regression

From inexact optimization to learning via gradient concentration

no code implementations • 9 Jun 2021 • Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco

Optimization in machine learning typically deals with the minimization of empirical objectives defined by training data.

Stochastic Gradient Descent Meets Distribution Regression

no code implementations • 24 Oct 2020 • Nicole Mücke

Stochastic gradient descent (SGD) provides a simple and efficient way to solve a broad range of machine learning problems.

regression

Stochastic Gradient Descent in Hilbert Scales: Smoothness, Preconditioning and Earlier Stopping

no code implementations • 18 Jun 2020 • Nicole Mücke, Enrico Reiss

Stochastic Gradient Descent (SGD) has become the method of choice for solving a broad range of machine learning problems.

Reproducing kernel Hilbert spaces on manifolds: Sobolev and Diffusion spaces

no code implementations • 27 May 2019 • Ernesto De Vito, Nicole Mücke, Lorenzo Rosasco

We study reproducing kernel Hilbert spaces (RKHS) on a Riemannian manifold.

Empirical Risk Minimization in the Interpolating Regime with Application to Neural Network Learning

no code implementations • 25 May 2019 • Nicole Mücke, Ingo Steinwart

Moreover, we show that the same phenomenon occurs for DNNs with zero training error and sufficiently large architectures.

Learning Theory

Beating SGD Saturation with Tail-Averaging and Minibatching

no code implementations • NeurIPS 2019 • Nicole Mücke, Gergely Neu, Lorenzo Rosasco

While stochastic gradient descent (SGD) is one of the major workhorses in machine learning, the learning properties of many practically used variants are poorly understood.
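
For illustration, a minimal sketch of the two variants in the title, tail-averaged minibatch SGD on least squares (sizes and stepsize are assumptions):

import numpy as np

rng = np.random.default_rng(0)
n, d, batch = 1000, 10, 32
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

lr, steps = 0.05, 2000
w, tail = np.zeros(d), []
for t in range(steps):
    idx = rng.integers(0, n, batch)              # minibatching reduces gradient noise
    w -= lr * X[idx].T @ (X[idx] @ w - y[idx]) / batch
    if t >= steps // 2:                          # tail-averaging: discard early iterates,
        tail.append(w.copy())                    # average the rest to fight saturation

w_tail = np.mean(tail, axis=0)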

Adaptivity for Regularized Kernel Methods by Lepskii's Principle

no code implementations • 15 Apr 2018 • Nicole Mücke

We address the problem of adaptivity in the framework of reproducing kernel Hilbert space (RKHS) regression.

regression
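
A generic sketch of a Lepskii-type balancing rule (the textbook form, not necessarily the paper's exact rule; the constant C and the error bounds sigma are assumed inputs):

import numpy as np

def lepskii_select(estimators, sigma, C=4.0):
    """estimators[j]: estimate for the j-th parameter on a grid ordered so
    that the (unknown) bias grows with j while sigma[j], a computable bound
    on the stochastic error, decreases. Returns the largest index j whose
    estimator agrees with every earlier one up to C * sigma[earlier]."""
    chosen = 0
    for j in range(len(estimators)):
        if all(np.linalg.norm(estimators[j] - estimators[i]) <= C * sigma[i]
               for i in range(j)):
            chosen = j
    return chosen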

Kernel regression, minimax rates and effective dimensionality: beyond the regular case

no code implementations • 12 Nov 2016 • Gilles Blanchard, Nicole Mücke

These questions have been considered in past literature, but only under specific assumptions about the decay, typically polynomial, of the spectrum of the kernel mapping covariance operator.

regression
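
For reference, the effective dimension behind such rates, in standard notation (symbols assumed, not quoted from the paper):

\[
\mathcal{N}(\lambda) = \operatorname{tr}\bigl((T + \lambda)^{-1} T\bigr), \qquad \lambda > 0,
\]

where $T$ is the kernel covariance operator. In the "regular" case of polynomial eigenvalue decay $\mu_i \asymp i^{-b}$ with $b > 1$, one gets $\mathcal{N}(\lambda) \asymp \lambda^{-1/b}$; the paper derives rates without assuming such a decay exponent exists.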

Parallelizing Spectral Algorithms for Kernel Learning

no code implementations • 24 Oct 2016 • Gilles Blanchard, Nicole Mücke

We consider a distributed learning approach in supervised learning for a large class of spectral regularization methods in an RKHS framework.

regression
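
A minimal sketch of the divide-and-conquer idea with Tikhonov regularization, one member of the spectral family (the kernel, bandwidth, and lambda below are assumptions):

import numpy as np

rng = np.random.default_rng(0)
n, workers, lam = 600, 3, 1e-2
X = rng.uniform(-1.0, 1.0, n)
y = np.sin(np.pi * X) + 0.1 * rng.standard_normal(n)

def kern(a, b):                                # Gaussian kernel, bandwidth assumed
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / 0.1)

def local_estimator(Xs, ys):                   # Tikhonov / kernel ridge regression
    alpha = np.linalg.solve(kern(Xs, Xs) + len(Xs) * lam * np.eye(len(Xs)), ys)
    return lambda t: kern(t, Xs) @ alpha

shards = np.array_split(rng.permutation(n), workers)
fs = [local_estimator(X[idx], y[idx]) for idx in shards]

t = np.linspace(-1.0, 1.0, 50)
f_bar = np.mean([f(t) for f in fs], axis=0)    # average the local estimators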

Optimal Rates For Regularization Of Statistical Inverse Learning Problems

no code implementations • 14 Apr 2016 • Gilles Blanchard, Nicole Mücke

We consider a statistical inverse learning problem, where we observe the image of a function $f$ through a linear operator $A$ at i.i.d. random design points.
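
Spelled out, the observation model is (the noise symbol is an assumption; the rest follows the abstract):

\[
Y_i = (A f)(X_i) + \varepsilon_i, \qquad i = 1, \dots, n,
\]

with $X_i$ drawn i.i.d. and $\varepsilon_i$ centered noise; the task is to recover $f$ from these indirect, noisy point evaluations.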
