Search Results for author: Andreas Christmann

Found 10 papers, 0 papers with code

Optimality of Robust Online Learning

no code implementations · 20 Apr 2023 · Zheng-Chu Guo, Andreas Christmann, Lei Shi

In this paper, we study an online learning algorithm with a robust loss function $\mathcal{L}_{\sigma}$ for regression over a reproducing kernel Hilbert space (RKHS).
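To make the setting concrete, here is a minimal sketch of online regularized regression in an RKHS with a robust loss. This is not the paper's algorithm: the Huber-type loss, the Gaussian kernel, and the fixed step size and regularization parameter are all illustrative assumptions standing in for the paper's loss $\mathcal{L}_{\sigma}$ and step-size schedule.

```python
import math, random

def gaussian_kernel(x, y, gamma=1.0):
    # Gaussian RBF kernel; gamma is an illustrative choice
    return math.exp(-gamma * (x - y) ** 2)

def huber_grad(r, sigma=1.0):
    # derivative of the Huber loss: linear tails cap the influence of outliers
    return r if abs(r) <= sigma else sigma * (1.0 if r > 0 else -1.0)

def online_kernel_regression(stream, eta=0.1, lam=0.01, sigma=1.0):
    # the hypothesis is kept as a kernel expansion: list of (coefficient, center)
    expansion = []
    for x, y in stream:
        f_x = sum(c * gaussian_kernel(z, x) for c, z in expansion)
        # regularization shrinks old coefficients; each new sample adds one term
        expansion = [((1 - eta * lam) * c, z) for c, z in expansion]
        expansion.append((-eta * huber_grad(f_x - y, sigma), x))
    return expansion

random.seed(0)
stream = [(x, math.sin(x) + random.gauss(0, 0.1))
          for x in (random.uniform(-3, 3) for _ in range(200))]
model = online_kernel_regression(stream)
f = lambda x: sum(c * gaussian_kernel(z, x) for c, z in model)
```

Each observation contributes one expansion term, so the model grows linearly with the stream; the bounded loss derivative is what makes the update robust to outlying responses.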

Task: regression

Total Stability of SVMs and Localized SVMs

no code implementations · 29 Jan 2021 · Hannes Köhler, Andreas Christmann

Regularized kernel-based methods such as support vector machines (SVMs) typically depend on the underlying probability measure $\mathrm{P}$ (respectively an empirical measure $\mathrm{D}_n$ in applications) as well as on the regularization parameter $\lambda$ and the kernel $k$.
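The dependence on the regularization parameter $\lambda$ can be seen in a toy computation. The sketch below uses kernel ridge regression as a simple stand-in for the regularized kernel methods discussed above (the Gaussian kernel, the three-point data set, and the gradient-descent solver are all assumptions made for the example):

```python
import math

def gram(xs, gamma=1.0):
    # Gaussian-kernel Gram matrix; gamma is an illustrative choice
    return [[math.exp(-gamma * (a - b) ** 2) for b in xs] for a in xs]

def fit_krr(xs, ys, lam, steps=3000, eta=0.05):
    # minimize (1/n) * ||K a - y||^2 + lam * a^T K a by gradient descent
    K, n = gram(xs), len(xs)
    a = [0.0] * n
    for _ in range(steps):
        Ka = [sum(K[i][j] * a[j] for j in range(n)) for i in range(n)]
        grad = [sum(K[i][j] * ((2 / n) * (Ka[j] - ys[j]) + 2 * lam * a[j])
                    for j in range(n)) for i in range(n)]
        a = [a[i] - eta * g for i, g in enumerate(grad)]
    return a

xs, ys = [-1.0, 0.0, 1.0], [0.5, -0.2, 0.7]
a_small = fit_krr(xs, ys, lam=1e-3)  # weak regularization: near-interpolation
a_large = fit_krr(xs, ys, lam=1.0)   # strong regularization: shrunken solution
```

Running both fits shows the expansion coefficients shrink as $\lambda$ grows, and changing `gamma` changes the solution as well, which is exactly the dependence on $\lambda$ and $k$ that stability results have to control.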

On the robustness of kernel-based pairwise learning

no code implementations · 29 Oct 2020 · Patrick Gensler, Andreas Christmann

It is shown that many results on the statistical robustness of kernel-based pairwise learning can be derived under basically no assumptions on the input and output spaces.

Total stability of kernel methods

no code implementations · 22 Sep 2017 · Andreas Christmann, Dao-Hong Xiang, Ding-Xuan Zhou

However, the kernel actually used often depends on one or a few hyperparameters, or is even data dependent in a much more complicated manner.

On the Robustness of Regularized Pairwise Learning Methods Based on Kernels

no code implementations · 12 Oct 2015 · Andreas Christmann, Ding-Xuan Zhou

Regularized empirical risk minimization including support vector machines plays an important role in machine learning theory.

Tasks: BIG-bench Machine Learning, Learning Theory

Universal Kernels on Non-Standard Input Spaces

no code implementations · NeurIPS 2010 · Andreas Christmann, Ingo Steinwart

We apply this technique for the following special cases: universal kernels on the set of probability measures, universal kernels based on Fourier transforms, and universal kernels for signal processing.

Task: Text Classification

Fast Learning from Non-i.i.d. Observations

no code implementations · NeurIPS 2009 · Ingo Steinwart, Andreas Christmann

We prove an oracle inequality for generic regularized empirical risk minimization algorithms learning from $\alpha$-mixing processes.

Sparsity of SVMs that use the epsilon-insensitive loss

no code implementations · NeurIPS 2008 · Ingo Steinwart, Andreas Christmann

In this paper, lower and upper bounds on the number of support vectors are derived for support vector machines (SVMs) based on the epsilon-insensitive loss function.
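The sparsity mechanism behind these bounds can be illustrated with a toy stochastic-subgradient sketch (this is not the SVM solver analyzed in the paper; the Gaussian kernel, step sizes, and data are assumptions for the example): a sample whose residual stays inside the epsilon-tube contributes a zero subgradient, so it never enters the kernel expansion, and widening the tube tends to reduce the number of support vectors.

```python
import math, random

def eps_insensitive_grad(r, eps):
    # subgradient of the epsilon-insensitive loss max(0, |r| - eps)
    if abs(r) <= eps:
        return 0.0  # inside the tube: no update, the point stays a non-support vector
    return 1.0 if r > 0 else -1.0

def train_svr(data, eps, eta=0.1, lam=0.01, gamma=1.0):
    k = lambda a, b: math.exp(-gamma * (a - b) ** 2)
    coeffs = {}  # sparse kernel expansion: sample index -> coefficient
    for i, (x, y) in enumerate(data):
        f_x = sum(c * k(data[j][0], x) for j, c in coeffs.items())
        g = eps_insensitive_grad(f_x - y, eps)
        for j in coeffs:  # regularization shrinks existing coefficients
            coeffs[j] *= 1 - eta * lam
        if g != 0.0:
            coeffs[i] = -eta * g  # only tube-violating samples become support vectors
    return coeffs

random.seed(1)
data = [(x, math.sin(x)) for x in (random.uniform(-3, 3) for _ in range(100))]
n_sv_small = len(train_svr(data, eps=0.01))
n_sv_large = len(train_svr(data, eps=0.5))
```

With the tight tube nearly every sample violates the margin and is added to the expansion, while the wide tube leaves many samples with a zero subgradient, so `n_sv_large` comes out smaller than `n_sv_small`.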
