no code implementations • 10 Jul 2024 • Dinghao Cao, Zheng-Chu Guo, Lei Shi
This paper presents a comprehensive study of the convergence rates of the stochastic gradient descent (SGD) algorithm applied to overparameterized two-layer neural networks.
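A minimal NumPy sketch of this setting: one-sample SGD on the squared loss for a wide two-layer ReLU network with fixed random outer weights and a polynomially decaying step size. The width, step-size schedule, and synthetic data are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative overparameterized two-layer ReLU net f(x) = a^T relu(Wx) / sqrt(m);
# width m, data, and step-size schedule are assumptions, not the paper's setting.
n, d, m = 100, 5, 2000
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

W = rng.standard_normal((m, d))           # trainable inner weights
a = rng.choice([-1.0, 1.0], size=m)       # fixed random outer weights

def predict(x):
    return a @ np.maximum(W @ x, 0.0) / np.sqrt(m)

eta0, theta = 0.5, 0.5                    # decaying steps eta_t = eta0 * t^(-theta)
for t in range(1, 5001):
    i = rng.integers(n)                   # draw one sample per iteration
    x, r = X[i], predict(X[i]) - y[i]     # residual at the sampled point
    grad_W = r * (a[:, None] * (W @ x > 0)[:, None] * x) / np.sqrt(m)
    W -= eta0 * t ** (-theta) * grad_W    # one SGD step on the squared loss

print("training MSE:", np.mean([(predict(X[i]) - y[i]) ** 2 for i in range(n)]))
```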
no code implementations • 20 Apr 2023 • Zheng-Chu Guo, Andreas Christmann, Lei Shi
In this paper, we study an online learning algorithm with a robust loss function $\mathcal{L}_{\sigma}$ for regression over a reproducing kernel Hilbert space (RKHS).
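As a rough illustration of such an algorithm, the sketch below runs online kernel gradient descent with a Welsch-type robust loss. The Gaussian kernel, the specific loss, and the step-size schedule are assumptions for illustration, not necessarily the paper's $\mathcal{L}_{\sigma}$.

```python
import numpy as np

rng = np.random.default_rng(1)

def K(x, z, gamma=1.0):
    """Gaussian kernel -- an illustrative choice of RKHS."""
    return np.exp(-gamma * (x - z) ** 2)

def robust_grad(r, sigma=1.0):
    # Derivative of the Welsch-type loss (sigma^2/2)(1 - exp(-r^2/sigma^2)):
    # bounded in r, so single outliers cannot dominate an update.
    return r * np.exp(-(r ** 2) / sigma ** 2)

T = 500
xs = rng.uniform(-3, 3, T)
ys = np.sin(xs) + 0.2 * rng.standard_normal(T)
ys[::25] += 5.0                           # inject outliers the robust loss should resist

coef, pts = [], []                        # f_t = sum_j coef[j] * K(pts[j], .)
for t in range(T):
    f_xt = sum(c * K(p, xs[t]) for c, p in zip(coef, pts))
    eta_t = 0.5 / (t + 1) ** 0.5          # decaying step size (assumed schedule)
    coef.append(-eta_t * robust_grad(f_xt - ys[t]))
    pts.append(xs[t])

print("f(0) ~", sum(c * K(p, 0.0) for c, p in zip(coef, pts)))
```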
no code implementations • 24 Nov 2022 • Yuan Mao, Zheng-Chu Guo
However, it remains an open problem to obtain capacity-independent convergence rates for the estimation error of the unregularized online learning algorithm with a decaying step size.
no code implementations • 25 Sep 2022 • Xin Guo, Zheng-Chu Guo, Lei Shi
This article provides a convergence analysis of online stochastic gradient descent algorithms for functional linear models.
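A minimal sketch of one such online SGD pass for a functional linear model $Y = \langle \beta, X \rangle_{L^2} + \varepsilon$, with the covariate functions discretized on a grid; the grid, slope function, and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Functional linear model Y = <beta, X>_{L^2} + noise, discretized on a grid.
grid = np.linspace(0, 1, 200)
dx = grid[1] - grid[0]
beta_true = np.sin(2 * np.pi * grid)      # illustrative slope function

beta_hat = np.zeros_like(grid)
for t in range(1, 2001):
    # random functional covariate: a Brownian-motion-like sample path
    X_t = np.cumsum(rng.standard_normal(grid.size)) * np.sqrt(dx)
    y_t = np.sum(beta_true * X_t) * dx + 0.1 * rng.standard_normal()
    resid = np.sum(beta_hat * X_t) * dx - y_t
    beta_hat -= (0.5 / t ** 0.6) * resid * X_t   # one online SGD step in L^2

err = np.sqrt(np.sum((beta_hat - beta_true) ** 2) * dx)
print(f"L2 error of slope estimate: {err:.3f}")
```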
no code implementations • 26 Aug 2022 • Yuan Mao, Lei Shi, Zheng-Chu Guo
Compared with the kernel methods for distribution regression in the literature, the algorithm under consideration does not require the kernel to be symmetric or positive semi-definite, and hence provides a simple paradigm for designing indefinite kernel methods, which enriches the theme of distribution regression.
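A rough sketch of the idea, assuming an empirical mean-embedding-style kernel built from an indefinite tanh base kernel and a simple online least-squares update over sample bags; all choices here are illustrative, not the paper's exact two-stage scheme.

```python
import numpy as np

rng = np.random.default_rng(3)

def K_bag(A, B):
    # Empirical mean-embedding-style kernel between two sample bags, built
    # from a tanh base kernel, which is indefinite (not positive semi-definite).
    return np.mean(np.tanh(0.5 * np.outer(A, B) + 0.1))

T, bag = 300, 30
means = rng.uniform(-2, 2, T)
bags = means[:, None] + rng.standard_normal((T, bag))  # one sample bag per distribution
ys = np.sin(means) + 0.1 * rng.standard_normal(T)

coef, seen = [], []                       # f_t = sum_j coef[j] * K_bag(bag_j, .)
for t in range(T):
    f_t = sum(c * K_bag(b, bags[t]) for c, b in zip(coef, seen))
    eta_t = 0.3 / (t + 1) ** 0.5          # decaying step size (assumed)
    coef.append(-eta_t * (f_t - ys[t]))   # online least-squares update
    seen.append(bags[t])
```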
no code implementations • 1 Jan 2019 • Zheng-Chu Guo, Lei Shi, Shao-Bo Lin
Based on refined covering number estimates, we find that, in realizing some complex data features, deep nets can improve on the performance of shallow neural networks (shallow nets for short) without requiring additional capacity costs.
no code implementations • 10 Oct 2017 • Zheng-Chu Guo, Lei Shi
In this paper, we study an online learning algorithm without explicit regularization terms.
no code implementations • 9 Aug 2017 • Yunwen Lei, Lei Shi, Zheng-Chu Guo
In this paper, we study the convergence of online gradient descent algorithms in reproducing kernel Hilbert spaces (RKHSs) without regularization.
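The unregularized online update in this setting typically takes the form $f_{t+1} = f_t - \eta_t (f_t(x_t) - y_t) K(x_t, \cdot)$; below is a minimal sketch with a Gaussian kernel and an assumed polynomially decaying step size.

```python
import numpy as np

rng = np.random.default_rng(4)

def K(x, z, gamma=2.0):
    return np.exp(-gamma * (x - z) ** 2)  # Gaussian kernel (illustrative)

T = 600
xs = rng.uniform(0, 1, T)
ys = np.cos(2 * np.pi * xs) + 0.1 * rng.standard_normal(T)

# No penalty term anywhere: the decaying eta_t plays the regularizing role.
coef, pts = [], []
for t in range(T):
    f_xt = sum(c * K(p, xs[t]) for c, p in zip(coef, pts))
    eta_t = 1.0 / (t + 1) ** 0.5          # assumed polynomial decay
    coef.append(-eta_t * (f_xt - ys[t]))
    pts.append(xs[t])

print("f(0.25) ~", sum(c * K(p, 0.25) for c, p in zip(coef, pts)))
```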
no code implementations • 7 Aug 2017 • Zheng-Chu Guo, Lei Shi, Qiang Wu
The regularization kernel network is an effective and widely used method for nonlinear regression analysis.
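The regularization kernel network amounts to regularized least squares in an RKHS, whose minimizer has the closed form $\alpha = (G + n\lambda I)^{-1} y$; a short sketch with an illustrative Gaussian kernel and regularization parameter:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 80
xs = rng.uniform(-1, 1, n)
ys = xs ** 2 + 0.1 * rng.standard_normal(n)

# Gram matrix of an illustrative Gaussian kernel.
G = np.exp(-5.0 * (xs[:, None] - xs[None, :]) ** 2)

# Regularized least squares in the RKHS:
#   min_f (1/n) sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2,
# solved in closed form by alpha = (G + n * lam * I)^{-1} y.
lam = 1e-3                                # regularization parameter (assumed)
alpha = np.linalg.solve(G + n * lam * np.eye(n), ys)

def f(x):
    return alpha @ np.exp(-5.0 * (xs - x) ** 2)

print(f"f(0.5) ~ {f(0.5):.3f} (target 0.25)")
```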
no code implementations • 11 May 2014 • Yuyi Wang, Jan Ramon, Zheng-Chu Guo
Many machine learning algorithms are based on the assumption that training examples are drawn independently.
no code implementations • 13 Jun 2013 • Zheng-Chu Guo, Yiming Ying
In this paper, we propose a regularized similarity learning formulation associated with general matrix norms and establish its generalization bounds.
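As a toy instance of such a formulation, the sketch below learns a bilinear similarity $s_M(x, x') = x^{\top} M x'$ by subgradient descent on a hinge loss with a Frobenius-norm regularizer (one particular matrix norm); the labels, hyperparameters, and choice of norm are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

d = 5
M = np.zeros((d, d))                      # bilinear similarity s_M(x, x') = x^T M x'
lam, eta = 0.01, 0.05                     # illustrative hyperparameters

for _ in range(1000):
    x, xp = rng.standard_normal(d), rng.standard_normal(d)
    tau = 1.0 if x @ xp > 0 else -1.0     # synthetic "similar / dissimilar" label
    # subgradient of hinge loss max(0, 1 - tau * s_M) plus Frobenius regularizer
    hinge = -tau * np.outer(x, xp) if tau * (x @ M @ xp) < 1 else np.zeros((d, d))
    M -= eta * (hinge + 2 * lam * M)

v = np.ones(d)
print("similarity of an aligned pair:", v @ M @ v)
```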
no code implementations • 3 Jun 2013 • Yuyi Wang, Jan Ramon, Zheng-Chu Guo
Many machine learning algorithms are based on the assumption that training examples are drawn independently.