no code implementations • 2 Sep 2024 • Andreas Christmann, Yunwen Lei
In this paper, methods for using the empirical bootstrap approach with stochastic gradient descent (SGD) to minimize the empirical risk over a separable Hilbert space are investigated from the viewpoint of algorithmic stability and statistical robustness.
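As a rough illustration of the general idea (not the paper's actual algorithm), the sketch below reruns plain SGD for one-dimensional least-squares regression on bootstrap resamples drawn with replacement; the function names, step size, and toy data are all invented for this example:

```python
import random

def sgd_least_squares(data, lr=0.1, epochs=50):
    """Plain SGD for 1-D least-squares regression f(x) = w * x."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            grad = (w * x - y) * x  # gradient of 0.5 * (w*x - y)^2 in w
            w -= lr * grad
    return w

def bootstrap_sgd(data, n_boot=20, seed=0):
    """Empirical bootstrap: rerun SGD on resamples drawn with replacement."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]
        estimates.append(sgd_least_squares(resample))
    return estimates

# Noise-free toy data from y = 2x; every bootstrap estimate should be near 2.
data = [(x / 10.0, 2.0 * (x / 10.0)) for x in range(1, 11)]
ests = bootstrap_sgd(data)
```

The spread of the bootstrap estimates is what a stability analysis quantifies: for a stable algorithm, resampling the data perturbs the output only slightly.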
no code implementations • 20 Apr 2023 • Zheng-Chu Guo, Andreas Christmann, Lei Shi
In this paper, we study an online learning algorithm with a robust loss function $\mathcal{L}_{\sigma}$ for regression over a reproducing kernel Hilbert space (RKHS).
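A minimal sketch of an online kernel-regression update with a robust loss, using a Gaussian kernel and a Welsch-type loss as a stand-in for the paper's $\mathcal{L}_{\sigma}$; the class name, step size, and parameters are illustrative assumptions, not the paper's algorithm:

```python
import math

def gauss_kernel(x, z, gamma=1.0):
    return math.exp(-gamma * (x - z) ** 2)

def welsch_grad(r, sigma=1.0):
    # Derivative of the Welsch-type loss (sigma^2 / 2) * (1 - exp(-r^2 / sigma^2)).
    # It is bounded in r, which damps the influence of outliers.
    return r * math.exp(-(r ** 2) / sigma ** 2)

class OnlineKernelRegressor:
    """Keeps the representer expansion f(x) = sum_i a_i k(x_i, x)."""

    def __init__(self, step=0.5, sigma=1.0, gamma=1.0):
        self.centers, self.coefs = [], []
        self.step, self.sigma, self.gamma = step, sigma, gamma

    def predict(self, x):
        return sum(a * gauss_kernel(c, x, self.gamma)
                   for c, a in zip(self.centers, self.coefs))

    def update(self, x, y):
        # One stochastic gradient step in the RKHS: append a new center
        # with coefficient proportional to the (robustified) residual gradient.
        r = self.predict(x) - y
        self.centers.append(x)
        self.coefs.append(-self.step * welsch_grad(r, self.sigma))

# Stream samples of the constant target y = 1 over a grid in [0, 1].
model = OnlineKernelRegressor()
for _ in range(3):
    for i in range(10):
        model.update(i / 10.0, 1.0)
```

Each incoming sample adds one kernel expansion term, so the hypothesis stays inside the RKHS spanned by the observed inputs.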
no code implementations • 29 Jan 2021 • Hannes Köhler, Andreas Christmann
Regularized kernel-based methods such as support vector machines (SVMs) typically depend on the underlying probability measure $\mathrm{P}$ (respectively an empirical measure $\mathrm{D}_n$ in applications) as well as on the regularization parameter $\lambda$ and the kernel $k$.
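To illustrate this dependence, the sketch below uses kernel ridge regression as a simple stand-in for a regularized kernel-based method: the same empirical sample $\mathrm{D}_n$ is fitted with the same kernel $k$ under a weak and a strong regularization parameter $\lambda$. The helper names and the toy data are invented for this example:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kernel_ridge(data, lam, k):
    """Fit f(x) = sum_i a_i k(x_i, x) with a = (K + n*lam*I)^{-1} y."""
    xs = [x for x, _ in data]
    ys = [y for _, y in data]
    n = len(xs)
    K = [[k(xi, xj) + (n * lam if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    a = solve(K, ys)
    return lambda x: sum(ai * k(xi, x) for ai, xi in zip(a, xs))

def rbf(x, z):
    return math.exp(-(x - z) ** 2)

data = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]        # empirical measure D_n
f_small = kernel_ridge(data, 1e-6, rbf)             # weak lambda: near-interpolation
f_large = kernel_ridge(data, 10.0, rbf)             # strong lambda: shrunk toward 0
```

With the same data and kernel, changing $\lambda$ alone moves the fitted function from near-interpolation to heavy shrinkage, which is exactly the kind of dependence a stability or robustness analysis must control.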
no code implementations • 29 Oct 2020 • Patrick Gensler, Andreas Christmann
It is shown that many results on the statistical robustness of kernel-based pairwise learning can be derived under basically no assumptions on the input and output spaces.
no code implementations • 22 Sep 2017 • Andreas Christmann, Dao-Hong Xiang, Ding-Xuan Zhou
However, the kernel actually used often depends on one or a few hyperparameters, or it may even depend on the data in a much more complicated manner.
no code implementations • 15 Apr 2016 • Andreas Christmann, Florian Dumpert, Dao-Hong Xiang
Statistical machine learning plays an important role in modern statistics and computer science.
no code implementations • 12 Oct 2015 • Andreas Christmann, Ding-Xuan Zhou
Regularized empirical risk minimization including support vector machines plays an important role in machine learning theory.
no code implementations • 14 May 2014 • Andreas Christmann, Ding-Xuan Zhou
Additive models play an important role in semiparametric statistics.
no code implementations • NeurIPS 2010 • Andreas Christmann, Ingo Steinwart
We apply this technique for the following special cases: universal kernels on the set of probability measures, universal kernels based on Fourier transforms, and universal kernels for signal processing.
no code implementations • NeurIPS 2009 • Ingo Steinwart, Andreas Christmann
We prove an oracle inequality for generic regularized empirical risk minimization algorithms learning from $\alpha$-mixing processes.
no code implementations • NeurIPS 2008 • Ingo Steinwart, Andreas Christmann
In this paper, lower and upper bounds on the number of support vectors are derived for support vector machines (SVMs) based on the $\epsilon$-insensitive loss function.
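For reference, the $\epsilon$-insensitive loss is $L_\epsilon(r) = \max(0, |r| - \epsilon)$: residuals inside the $\epsilon$-tube incur zero loss, which is why the corresponding training points need not become support vectors. A one-line sketch (the function name is illustrative):

```python
def eps_insensitive(r, eps=0.1):
    """Vapnik's epsilon-insensitive loss: zero inside the tube |r| <= eps."""
    return max(0.0, abs(r) - eps)

inside = eps_insensitive(0.05)    # residual inside the tube: zero loss
outside = eps_insensitive(-0.3)   # residual outside the tube: linear penalty
```

The flat region of this loss is what makes the number of support vectors data-dependent, and hence worth bounding from both sides.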