Search Results for author: Zheng-Chu Guo

Found 11 papers, 0 papers with code

Optimality of Robust Online Learning

no code implementations • 20 Apr 2023 • Zheng-Chu Guo, Andreas Christmann, Lei Shi

In this paper, we study an online learning algorithm with a robust loss function $\mathcal{L}_{\sigma}$ for regression over a reproducing kernel Hilbert space (RKHS).

regression
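The robust online scheme above can be sketched with a stand-in loss. The snippet below is a minimal, hypothetical example: a Huber-style clipped gradient stands in for the paper's loss $\mathcal{L}_{\sigma}$, with a Gaussian kernel and a polynomially decaying step size; all parameter names and numeric choices are illustrative assumptions, not the paper's.

```python
import numpy as np

def gauss_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel on scalars or vectors."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def robust_online_rkhs(X, y, sigma=1.0, eta0=0.5, theta=0.5, gamma=1.0):
    """Online regression in an RKHS with a robust loss (sketch).

    The iterate is kept as a kernel expansion f_t = sum_i c_i K(x_i, .),
    updated once per observation with step size eta_t = eta0 * t**(-theta).
    """
    centers, coefs = [], []
    for t, (x_t, y_t) in enumerate(zip(X, y), start=1):
        pred = sum(c * gauss_kernel(xc, x_t, gamma) for xc, c in zip(centers, coefs))
        resid = pred - y_t
        # clipped gradient: quadratic near zero, linear beyond scale sigma
        grad = resid if abs(resid) <= sigma else sigma * np.sign(resid)
        centers.append(x_t)
        coefs.append(-eta0 * t ** (-theta) * grad)

    def f(x):
        return sum(c * gauss_kernel(xc, x, gamma) for xc, c in zip(centers, coefs))
    return f
```

The robustness comes from the bounded gradient: a single outlier can move the iterate by at most eta_t * sigma, regardless of how large the residual is.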

Online Regularized Learning Algorithm for Functional Data

no code implementations • 24 Nov 2022 • Yuan Mao, Zheng-Chu Guo

It remains an open problem, however, to obtain capacity-independent convergence rates for the estimation error of the unregularized online learning algorithm with decaying step size.

Capacity dependent analysis for functional online learning algorithms

no code implementations • 25 Sep 2022 • Xin Guo, Zheng-Chu Guo, Lei Shi

This article provides a convergence analysis of online stochastic gradient descent algorithms for functional linear models.
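The kind of algorithm analyzed here can be illustrated on a grid: a hypothetical sketch of online SGD for the functional linear model y ≈ ∫ β(s) x(s) ds, with the slope function β discretized on a uniform grid. The grid size, step-size schedule, and Riemann-sum quadrature below are illustrative assumptions.

```python
import numpy as np

def functional_online_sgd(curves, ys, grid, eta0=0.5, theta=0.5):
    """Online SGD for the functional linear model y ~ ∫ beta(s) x(s) ds (sketch).

    curves[t] holds the covariate function x_t sampled on `grid`; the
    integral is approximated by a Riemann sum, and beta receives one
    stochastic-gradient step per observation with decaying step size.
    """
    ds = grid[1] - grid[0]
    beta = np.zeros_like(grid)
    for t, (x_t, y_t) in enumerate(zip(curves, ys), start=1):
        pred = np.sum(beta * x_t) * ds          # quadrature for the integral
        eta_t = eta0 * t ** (-theta)
        beta -= eta_t * (pred - y_t) * x_t      # gradient of (pred - y)^2 / 2
    return beta
```

Note that each update only moves β along the observed curve x_t, so β is recovered only within the span of the covariate functions; this is the usual identifiability caveat for functional linear regression.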

Coefficient-based Regularized Distribution Regression

no code implementations • 26 Aug 2022 • Yuan Mao, Lei Shi, Zheng-Chu Guo

Compared with the kernel methods for distribution regression in the literature, the algorithm under consideration does not require the kernel to be symmetric or positive semi-definite, and hence provides a simple paradigm for designing indefinite kernel methods, which enriches the study of distribution regression.

regression
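A minimal sketch of the coefficient-based idea: the penalty acts directly on the coefficient vector rather than on an RKHS norm, so the linear system stays solvable even when the kernel matrix is asymmetric or indefinite. The kernel, data, and λ below are illustrative placeholders, not the paper's choices.

```python
import numpy as np

def coefficient_based_ridge(K, y, lam):
    """Coefficient-based regularized least squares:
        min_c ||K c - y||^2 + lam * n * ||c||^2.
    Because the penalty is on c itself, K need not be symmetric
    or positive semi-definite."""
    n = K.shape[0]
    return np.linalg.solve(K.T @ K + lam * n * np.eye(n), K.T @ y)

# An asymmetric, indefinite 'kernel' matrix -- fine for this estimator.
X = np.linspace(0.0, 1.0, 5)
K = np.sin(3.0 * (X[:, None] - 2.0 * X[None, :]))   # not symmetric, not PSD
c_true = np.array([1.0, -1.0, 2.0, 0.5, -0.5])
y = K @ c_true
c_hat = coefficient_based_ridge(K, y, lam=1e-10)
```

The normal-equations matrix K.T @ K + lam * n * I is always symmetric positive definite for lam > 0, which is exactly why no assumption on K itself is needed.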

Realizing data features by deep nets

no code implementations • 1 Jan 2019 • Zheng-Chu Guo, Lei Shi, Shao-Bo Lin

Based on refined covering number estimates, we find that, to realize some complex data features, deep nets can improve the performance of shallow neural networks (shallow nets for short) without requiring additional capacity costs.

Fast and Strong Convergence of Online Learning Algorithms

no code implementations • 10 Oct 2017 • Zheng-Chu Guo, Lei Shi

In this paper, we study an online learning algorithm without explicit regularization terms.

Convergence of Unregularized Online Learning Algorithms

no code implementations • 9 Aug 2017 • Yunwen Lei, Lei Shi, Zheng-Chu Guo

In this paper we study the convergence of online gradient descent algorithms in reproducing kernel Hilbert spaces (RKHSs) without regularization.
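The unregularized scheme can be sketched as plain online gradient descent for least squares in an RKHS: there is no penalty term shrinking past coefficients, so only the decaying step size controls the iterates. The kernel choice and step-size schedule below are illustrative assumptions.

```python
import numpy as np

def online_kernel_gd(X, y, eta0=0.25, theta=0.5, gamma=5.0):
    """Unregularized online gradient descent in an RKHS (sketch).

    For the square loss, each step appends one kernel expansion term
    with coefficient -eta_t * (f_{t-1}(x_t) - y_t); nothing shrinks
    the earlier coefficients.
    """
    xs = np.asarray(X, dtype=float)
    coefs = np.zeros(len(xs))

    def k(a, b):
        return np.exp(-gamma * (a - b) ** 2)

    for t in range(len(xs)):
        pred = np.dot(coefs[:t], k(xs[:t], xs[t]))   # f_{t-1}(x_t)
        coefs[t] = -eta0 * (t + 1) ** (-theta) * (pred - y[t])

    def f(x):
        return np.dot(coefs, k(xs, x))
    return f
```

With theta in (0, 1) the step sizes are square-summable-free but still sum to infinity, which is the usual condition under which such unregularized iterates can converge without a penalty term.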

Learning Theory of Distributed Regression with Bias Corrected Regularization Kernel Network

no code implementations • 7 Aug 2017 • Zheng-Chu Guo, Lei Shi, Qiang Wu

The regularization kernel network is an effective and widely used method for nonlinear regression analysis.

Learning Theory • regression
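For context, the plain divide-and-conquer baseline (without the paper's bias-correction step) can be sketched as follows: partition the data, solve kernel ridge regression locally, and average the local estimators. The Gaussian kernel, λ, and stride partitioning are illustrative assumptions.

```python
import numpy as np

def local_krr(X, y, lam, gamma=10.0):
    """Kernel ridge regression on one local data block (Gaussian kernel)."""
    n = len(X)
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return lambda x: float(np.sum(alpha * np.exp(-gamma * (X - x) ** 2)))

def distributed_krr(X, y, m, lam, gamma=10.0):
    """Divide-and-conquer KRR: m local solves, then simple averaging.
    (The bias-corrected variant studied in the paper refines this
    averaging; that correction is omitted here.)"""
    # Stride partition so every local block covers the whole input range.
    fs = [local_krr(X[i::m], y[i::m], lam, gamma) for i in range(m)]
    return lambda x: float(np.mean([f(x) for f in fs]))
```

Each machine only inverts an (n/m) × (n/m) matrix, which is the computational point of the divide-and-conquer approach; the bias correction in the paper addresses the accuracy lost to regularizing each small block separately.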

Learning from networked examples

no code implementations • 11 May 2014 • Yuyi Wang, Jan Ramon, Zheng-Chu Guo

Many machine learning algorithms are based on the assumption that training examples are drawn independently.

Guaranteed Classification via Regularized Similarity Learning

no code implementations • 13 Jun 2013 • Zheng-Chu Guo, Yiming Ying

In this paper, we propose a regularized similarity learning formulation associated with general matrix norms and establish its generalization bounds.

BIG-bench Machine Learning • Classification • +3

Learning from networked examples in a k-partite graph

no code implementations • 3 Jun 2013 • Yuyi Wang, Jan Ramon, Zheng-Chu Guo

Many machine learning algorithms are based on the assumption that training examples are drawn independently.

BIG-bench Machine Learning
