Search Results for author: Shao-Bo Lin

Found 27 papers, 3 papers with code

Radial Basis Function Approximation with Distributively Stored Data on Spheres

no code implementations5 Dec 2021 Han Feng, Shao-Bo Lin, Ding-Xuan Zhou

This paper proposes a distributed weighted regularized least squares algorithm (DWRLS) based on spherical radial basis functions and spherical quadrature rules to tackle spherical data that are stored across numerous local servers and cannot be shared among them.
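
The sketch below is only a rough illustration of the divide-and-conquer idea behind DWRLS: it fits an unweighted regularized least squares estimator with a Gaussian of the geodesic distance on each "server" and averages the local predictors. The actual algorithm uses spherical radial basis functions with quadrature-rule weights; all function names and parameter values here are placeholders.

```python
import numpy as np

def geodesic_gram(X, Y, width=1.0):
    """Gaussian of the great-circle distance between unit vectors (rows of X and Y)."""
    cosines = np.clip(X @ Y.T, -1.0, 1.0)
    return np.exp(-(np.arccos(cosines) / width) ** 2)

def local_rls(X, y, lam=1e-3, width=1.0):
    """Regularized least squares on one local server; returns a predictor."""
    K = geodesic_gram(X, X, width)
    alpha = np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)
    return lambda Xnew: geodesic_gram(Xnew, X, width) @ alpha

def distributed_rls(blocks, lam=1e-3, width=1.0):
    """Average the local estimators; a crude stand-in for DWRLS's weighted combination."""
    predictors = [local_rls(Xb, yb, lam, width) for Xb, yb in blocks]
    return lambda Xnew: np.mean([f(Xnew) for f in predictors], axis=0)

# toy data on the unit sphere, split across 4 "servers"
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = X[:, 0] ** 2 + 0.05 * rng.normal(size=400)
blocks = [(X[i::4], y[i::4]) for i in range(4)]
f_hat = distributed_rls(blocks)
print("predictions:", f_hat(X[:3]), "targets:", y[:3])
```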

Generalization Performance of Empirical Risk Minimization on Over-parameterized Deep ReLU Nets

no code implementations28 Nov 2021 Shao-Bo Lin, Yao Wang, Ding-Xuan Zhou

In this paper, we study the generalization performance of global minima for implementing empirical risk minimization (ERM) on over-parameterized deep ReLU nets.

Nyström Regularization for Time Series Forecasting

no code implementations13 Nov 2021 Zirui Sun, Mingwei Dai, Yao Wang, Shao-Bo Lin

This paper focuses on learning rate analysis of Nyström regularization with sequential sub-sampling for $\tau$-mixing time series.

Time Series, Time Series Forecasting
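
A minimal sketch of the Nyström idea with sequential (consecutive) sub-sampling: the first m observations of the series serve as Nyström centers, and a ridge-regularized reduced system is solved. The Gaussian kernel, the sub-sample size m, and the regularization parameter are illustrative choices, not those analyzed in the paper.

```python
import numpy as np

def gauss_kernel(X, Y, width=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def nystrom_krr_sequential(X, y, m=50, lam=1e-2, width=1.0):
    """Nystrom kernel ridge regression with the first m samples as centers
    (sequential sub-sampling, natural for time-ordered data)."""
    C = X[:m]                                  # consecutive centers
    K_nm = gauss_kernel(X, C, width)           # n x m
    K_mm = gauss_kernel(C, C, width)           # m x m
    n = len(y)
    A = K_nm.T @ K_nm + lam * n * K_mm         # normal equations of the reduced problem
    alpha = np.linalg.solve(A + 1e-10 * np.eye(m), K_nm.T @ y)
    return lambda Xnew: gauss_kernel(Xnew, C, width) @ alpha

# toy autoregressive series turned into a one-step-ahead regression problem
rng = np.random.default_rng(1)
z = np.zeros(600)
for t in range(1, 600):
    z[t] = 0.8 * z[t - 1] + rng.normal(scale=0.1)
X, y = z[:-1, None], z[1:]
f_hat = nystrom_krr_sequential(X, y)
print("one-step forecast:", f_hat(X[-1:]), "actual:", y[-1])
```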

Universal Consistency of Deep Convolutional Neural Networks

no code implementations23 Jun 2021 Shao-Bo Lin, Kaidong Wang, Yao Wang, Ding-Xuan Zhou

Compared with the avid research activity on deep convolutional neural networks (DCNNs) in practice, the study of their theoretical behavior lags heavily behind.

Kernel-based L_2-Boosting with Structure Constraints

no code implementations16 Sep 2020 Yao Wang, Xin Guo, Shao-Bo Lin

Numerically, we carry out a series of simulations to show the promising performance of KReBooT (the proposed kernel-based re-scaled boosting with truncation) in terms of its good generalization, near-resistance to over-fitting, and structure constraints.
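
For orientation, here is a minimal sketch of plain kernel-based L2-boosting over the dictionary {K(., x_j)}: each step adds a shrunken kernel atom fitted to the current residual. The truncation and structure constraints that define KReBooT are omitted, and the kernel, step count, and shrinkage value are placeholders.

```python
import numpy as np

def gauss_kernel(X, Y, width=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def kernel_l2_boost(X, y, steps=200, shrink=0.1, width=1.0):
    """Plain kernel-based L2-boosting: each step adds a shrunken kernel atom
    that best fits the current residual (KReBooT's constraints omitted)."""
    K = gauss_kernel(X, X, width)
    coef = np.zeros(len(y))
    resid = y.copy()
    for _ in range(steps):
        corr = K @ resid
        j = np.argmax(np.abs(corr))            # atom most correlated with the residual
        step = corr[j] / (K[:, j] @ K[:, j])   # least-squares step along that atom
        coef[j] += shrink * step
        resid = y - K @ coef
    return lambda Xnew: gauss_kernel(Xnew, X, width) @ coef

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)
f_hat = kernel_l2_boost(X, y)
print("training MSE:", np.mean((f_hat(X) - y) ** 2))
```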

Kernel Interpolation of High Dimensional Scattered Data

no code implementations3 Sep 2020 Shao-Bo Lin, Xiangyu Chang, Xingping Sun

Data sites selected from modeling high-dimensional problems often appear scattered in non-paternalistic ways.

Depth Selection for Deep ReLU Nets in Feature Extraction and Generalization

no code implementations1 Apr 2020 Zhi Han, Siquan Yu, Shao-Bo Lin, Ding-Xuan Zhou

One of the most important challenges in deep learning is to figure out the relation between a feature and the depth of deep neural networks (deep nets for short), so as to reflect the necessity of depth.

Feature Engineering, Representation Learning

Distributed Kernel Ridge Regression with Communications

no code implementations27 Mar 2020 Shao-Bo Lin, Di Wang, Ding-Xuan Zhou

This paper focuses on generalization performance analysis for distributed algorithms in the framework of learning theory.

Learning Theory

Distributed Learning with Dependent Samples

no code implementations10 Feb 2020 Zirui Sun, Shao-Bo Lin

This paper focuses on learning rate analysis of distributed kernel ridge regression for strong mixing sequences.

Adaptive Stopping Rule for Kernel-based Gradient Descent Algorithms

no code implementations9 Jan 2020 Xiangyu Chang, Shao-Bo Lin

In this paper, we propose an adaptive stopping rule for kernel-based gradient descent (KGD) algorithms.

Learning Theory
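
The sketch below shows kernel-based gradient descent with a simple hold-out early-stopping rule; it is only a stand-in for the adaptive, data-driven stopping rule derived in the paper, and the kernel, step size, and patience parameter are illustrative.

```python
import numpy as np

def gauss_kernel(X, Y, width=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def kgd_early_stop(X, y, Xval, yval, eta=0.5, max_iter=500, patience=10, width=1.0):
    """Kernel gradient descent on the empirical risk, stopped when the validation
    error has not improved for `patience` iterations (a hold-out surrogate for
    the paper's adaptive stopping rule)."""
    n = len(y)
    K = gauss_kernel(X, X, width)
    Kval = gauss_kernel(Xval, X, width)
    alpha = np.zeros(n)
    best_err, best_alpha, stall = np.inf, alpha.copy(), 0
    for _ in range(max_iter):
        alpha -= eta * (K @ alpha - y) / n        # gradient step on the empirical risk
        err = np.mean((Kval @ alpha - yval) ** 2)
        if err < best_err - 1e-12:
            best_err, best_alpha, stall = err, alpha.copy(), 0
        else:
            stall += 1
            if stall >= patience:                 # stop: no recent improvement
                break
    return lambda Xnew: gauss_kernel(Xnew, X, width) @ best_alpha

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(300, 1))
y = np.cos(2 * X[:, 0]) + 0.1 * rng.normal(size=300)
f_hat = kgd_early_stop(X[:200], y[:200], X[200:], y[200:])
print("validation MSE:", np.mean((f_hat(X[200:]) - y[200:]) ** 2))
```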

Realization of spatial sparseness by deep ReLU nets with massive data

no code implementations16 Dec 2019 Charles K. Chui, Shao-Bo Lin, Bo Zhang, Ding-Xuan Zhou

The great success of deep learning poses urgent challenges for understanding its working mechanism and rationality.

Learning Theory

Fast Polynomial Kernel Classification for Massive Data

1 code implementation24 Nov 2019 Jinshan Zeng, Minrun Wu, Shao-Bo Lin, Ding-Xuan Zhou

In the era of big data, it is highly desired to develop efficient machine learning algorithms to tackle massive data challenges such as storage bottleneck, algorithmic scalability, and interpretability.

General Classification
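
One reason polynomial kernels suit massive data is that they admit an explicit finite-dimensional feature map, so a linear classifier on those features avoids forming the n×n kernel matrix. The scikit-learn sketch below illustrates that idea only; the paper's actual solver and parameter choices differ.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Explicit degree-2 polynomial features followed by a linear SVM: a scalable
# surrogate for polynomial kernel classification on a toy nonlinear task.
rng = np.random.default_rng(4)
X = rng.normal(size=(5000, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)

clf = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                    LinearSVC(C=1.0, max_iter=5000))
clf.fit(X[:4000], y[:4000])
print("held-out accuracy:", clf.score(X[4000:], y[4000:]))
```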

Distributed filtered hyperinterpolation for noisy data on the sphere

no code implementations6 Oct 2019 Shao-Bo Lin, Yu Guang Wang, Ding-Xuan Zhou

This paper develops distributed filtered hyperinterpolation for noisy data on the sphere, which assigns the data-fitting task to multiple servers so as to find a good approximation of the mapping from input to output data.

Model Selection

Deep Neural Networks for Rotation-Invariance Approximation and Learning

no code implementations3 Apr 2019 Charles K. Chui, Shao-Bo Lin, Ding-Xuan Zhou

Based on the tree architecture, the objective of this paper is to design deep neural networks with two or more hidden layers (called deep nets) that realize radial functions, so as to enable rotational invariance for near-optimal function approximation in an arbitrarily high-dimensional Euclidean space.

On ADMM in Deep Learning: Convergence and Saturation-Avoidance

1 code implementation6 Feb 2019 Jinshan Zeng, Shao-Bo Lin, Yuan YAO, Ding-Xuan Zhou

In this paper, we develop an alternating direction method of multipliers (ADMM) for training deep neural networks with sigmoid-type activation functions (called the sigmoid-ADMM pair). The approach is mainly motivated by the gradient-free nature of ADMM, which avoids the saturation of sigmoid-type activations, and by the approximation advantages of deep neural networks with sigmoid-type activations (deep sigmoid nets) over their rectified linear unit counterparts (deep ReLU nets).

Realizing data features by deep nets

no code implementations1 Jan 2019 Zheng-Chu Guo, Lei Shi, Shao-Bo Lin

Based on refined covering number estimates, we find that, to realize some complex data features, deep nets can improve the performances of shallow neural networks (shallow nets for short) without requiring additional capacity costs.

Learning through deterministic assignment of hidden parameters

no code implementations22 Mar 2018 Jian Fang, Shao-Bo Lin, Zongben Xu

Supervised learning frequently boils down to determining hidden and bright parameters in a parameterized hypothesis space based on finite input-output samples.

Generalization and Expressivity for Deep Nets

no code implementations10 Mar 2018 Shao-Bo Lin

Generalization and expressivity are two widely used measurements to quantify theoretical behaviors of deep learning.

Learning Theory

Construction of neural networks for realization of localized deep learning

no code implementations9 Mar 2018 Charles K. Chui, Shao-Bo Lin, Ding-Xuan Zhou

The subject of deep learning has recently attracted users of machine learning from various disciplines, including: medical diagnosis and bioinformatics, financial market analysis and online advertisement, speech and handwriting recognition, computer vision and natural language processing, time series forecasting, and search engines.

Dimensionality Reduction, Handwriting Recognition +3

Global Convergence of Block Coordinate Descent in Deep Learning

2 code implementations1 Mar 2018 Jinshan Zeng, Tim Tsz-Kit Lau, Shao-Bo Lin, Yuan YAO

Deep learning has aroused extensive attention due to its great empirical success.

Learning rates for classification with Gaussian kernels

no code implementations28 Feb 2017 Shao-Bo Lin, Jinshan Zeng, Xiangyu Chang

This paper aims at refined error analysis for binary classification using support vector machine (SVM) with Gaussian kernel and convex loss.

General Classification
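
A minimal scikit-learn sketch of the setting studied in the paper: binary classification with a Gaussian (RBF) kernel SVM, with the kernel width and margin parameter chosen by cross-validation. The grid values and toy data are illustrative, not those used in the analysis.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Gaussian-kernel SVM with C and gamma selected by 5-fold cross-validation.
rng = np.random.default_rng(5)
X = rng.normal(size=(600, 2))
y = np.sign(np.sin(2 * X[:, 0]) + X[:, 1] + 0.3 * rng.normal(size=600)).astype(int)

grid = {"C": [0.1, 1, 10], "gamma": [0.1, 0.5, 1.0]}
svm = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
svm.fit(X[:400], y[:400])
print("best params:", svm.best_params_)
print("held-out accuracy:", svm.score(X[400:], y[400:]))
```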

Distributed learning with regularized least squares

no code implementations11 Aug 2016 Shao-Bo Lin, Xin Guo, Ding-Xuan Zhou

We study distributed learning with the least squares regularization scheme in a reproducing kernel Hilbert space (RKHS).

Greedy Criterion in Orthogonal Greedy Learning

no code implementations20 Apr 2016 Lin Xu, Shao-Bo Lin, Jinshan Zeng, Xia Liu, Zongben Xu

In this paper, we find that the steepest gradient descent (SGD) criterion is not the unique greedy criterion, and we introduce a new greedy criterion, called the "$\delta$-greedy threshold", for learning.
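
As a rough illustration, the sketch below runs orthogonal greedy learning over the kernel dictionary {K(., x_j)} and accepts any atom whose correlation with the residual reaches a δ-fraction of the best one. This is only a loose reading of a threshold-type criterion; the paper's precise $\delta$-greedy threshold and step-size choices differ.

```python
import numpy as np

def gauss_kernel(X, Y, width=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def greedy_threshold_learning(X, y, delta=0.8, max_atoms=30, width=1.0):
    """Orthogonal greedy learning: accept the first atom whose correlation with the
    residual is at least `delta` times the best one, then re-fit by orthogonal
    projection onto the span of all selected atoms."""
    K = gauss_kernel(X, X, width)
    norms = np.linalg.norm(K, axis=0)
    selected, resid, coef = [], y.copy(), np.zeros(0)
    for _ in range(max_atoms):
        corr = np.abs(K.T @ resid) / norms
        best = corr.max()
        if best < 1e-8:
            break
        # threshold-type acceptance: any atom within delta of the best correlation
        j = int(np.flatnonzero(corr >= delta * best)[0])
        if j in selected:
            j = int(np.argmax(corr))
        selected.append(j)
        coef, *_ = np.linalg.lstsq(K[:, selected], y, rcond=None)   # orthogonal re-fit
        resid = y - K[:, selected] @ coef
    return selected, coef, lambda Xnew: gauss_kernel(Xnew, X, width)[:, selected] @ coef

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=(150, 1))
y = np.sin(4 * X[:, 0]) + 0.1 * rng.normal(size=150)
sel, coef, f_hat = greedy_threshold_learning(X, y)
print(len(sel), "atoms selected; training MSE:", np.mean((f_hat(X) - y) ** 2))
```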

Divide and Conquer Local Average Regression

no code implementations23 Jan 2016 Xiangyu Chang, Shao-Bo Lin, Yao Wang

After theoretically analyzing the pros and cons, we find that although divide-and-conquer local average regression can reach the optimal learning rate, the restriction on the number of data blocks is rather strong, which makes it feasible only for a small number of data blocks.
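
A minimal sketch of divide-and-conquer local average regression: the data are split into blocks, a Nadaraya-Watson local average is computed on each block, and the block-wise predictions are averaged. The Gaussian window, bandwidth, and number of blocks below are placeholders.

```python
import numpy as np

def local_average(Xb, yb, Xnew, h=0.1):
    """Nadaraya-Watson local average estimator with a Gaussian window of bandwidth h."""
    d2 = ((Xnew[:, None, :] - Xb[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * h ** 2))
    return (W @ yb) / np.maximum(W.sum(axis=1), 1e-12)

def dc_local_average(X, y, Xnew, n_blocks=5, h=0.1):
    """Divide-and-conquer: run the local average estimator on each data block
    and average the block-wise predictions."""
    preds = [local_average(X[b::n_blocks], y[b::n_blocks], Xnew, h)
             for b in range(n_blocks)]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(7)
X = rng.uniform(0, 1, size=(1000, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=1000)
Xnew = np.linspace(0, 1, 9)[:, None]
print(dc_local_average(X, y, Xnew, n_blocks=5))
```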

Shrinkage degree in $L_2$-re-scale boosting for regression

no code implementations17 May 2015 Lin Xu, Shao-Bo Lin, Yao Wang, Zongben Xu

Re-scale boosting (RBoosting) is a variant of boosting that can substantially improve the generalization performance of boosting learning.

Learning and approximation capability of orthogonal super greedy algorithm

no code implementations18 Sep 2014 Jian Fang, Shao-Bo Lin, Zongben Xu

We consider the approximation capability of the orthogonal super greedy algorithm (OSGA) and its applications in supervised learning.
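
A small sketch of the standard OSGA over a generic dictionary: at each iteration it selects several atoms at once (rather than one, as in orthogonal matching pursuit) and re-fits by orthogonal projection onto everything selected so far. The dictionary, batch size s, and iteration count are illustrative.

```python
import numpy as np

def osga(D, y, s=3, max_iter=10):
    """Orthogonal super greedy algorithm: pick the s dictionary atoms most
    correlated with the residual, then orthogonally project y onto the span
    of all selected atoms."""
    selected, resid, coef = [], y.copy(), np.zeros(0)
    for _ in range(max_iter):
        corr = np.abs(D.T @ resid)
        corr[selected] = -np.inf                    # do not re-select atoms
        new = list(np.argsort(corr)[-s:])           # the s best atoms this round
        selected += new
        coef, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
        resid = y - D[:, selected] @ coef
        if np.linalg.norm(resid) < 1e-8:
            break
    return selected, coef

# toy sparse-recovery example: y is a combination of a few columns of D plus noise
rng = np.random.default_rng(8)
D = rng.normal(size=(100, 60))
D /= np.linalg.norm(D, axis=0)
true = np.zeros(60); true[[3, 17, 42]] = [1.0, -2.0, 0.5]
y = D @ true + 0.01 * rng.normal(size=100)
sel, coef = osga(D, y, s=3, max_iter=4)
print("selected atoms:", sorted(sel))
```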
