Search Results for author: Shao-Bo Lin

Found 37 papers, 4 papers with code

Learning and approximation capability of orthogonal super greedy algorithm

no code implementations18 Sep 2014 Jian Fang, Shao-Bo Lin, Zongben Xu

We consider the approximation capability of orthogonal super greedy algorithms (OSGA) and their applications in supervised learning.
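The OSGA idea, roughly: at each pass select several dictionary atoms at once (plain orthogonal matching pursuit selects one), then re-fit by orthogonal projection onto the chosen atoms. A minimal numpy sketch; the function name and parameter values are illustrative, not from the paper:

```python
import numpy as np

def osga(D, y, s=2, n_iter=3):
    """Orthogonal super greedy sketch: select the s atoms most correlated
    with the residual each round, then re-fit by least squares (the
    orthogonal projection step). s = 1 recovers plain OMP."""
    n, p = D.shape
    support = []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(n_iter):
        corr = np.abs(D.T @ residual)
        corr[support] = -np.inf                    # never re-select an atom
        support.extend(np.argsort(corr)[-s:].tolist())
        A = D[:, support]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coef
    w = np.zeros(p)
    w[support] = coef
    return w
```

With a (near-)orthonormal dictionary and a 2-sparse target, a single super step picks up both active atoms at once.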

Shrinkage degree in $L_2$-re-scale boosting for regression

no code implementations17 May 2015 Lin Xu, Shao-Bo Lin, Yao Wang, Zongben Xu

Re-scale boosting (RBoosting) is a variant of boosting which can essentially improve the generalization performance of boosting learning.

regression

Divide and Conquer Local Average Regression

no code implementations23 Jan 2016 Xiangyu Chang, Shao-Bo Lin, Yao Wang

After theoretically analyzing the pros and cons, we find that although divide and conquer local average regression can reach the optimal learning rate, the restriction on the number of data blocks is rather strong, which makes it feasible only for a small number of data blocks.

regression
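The divide-and-conquer scheme at issue, in sketch form: randomly split the sample into m blocks, run a local average estimator on each block, and average the m local predictions. The boxcar Nadaraya-Watson stand-in and all names below are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

def local_average(x_blk, y_blk, x, h=0.2):
    # boxcar local average: mean of responses whose inputs lie within h of x
    w = (np.abs(x_blk - x) <= h).astype(float)
    return (w @ y_blk) / max(w.sum(), 1.0)

def dc_local_average(x, y, query, m=4, h=0.2, seed=0):
    # shuffle, split into m blocks, estimate locally on each, then average
    idx = np.random.default_rng(seed).permutation(len(x))
    preds = [local_average(x[b], y[b], query, h) for b in np.array_split(idx, m)]
    return float(np.mean(preds))
```

The entry's caveat shows up directly here: with too many blocks, some blocks contain no points near the query and their degenerate estimates drag down the average.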

Greedy Criterion in Orthogonal Greedy Learning

no code implementations20 Apr 2016 Lin Xu, Shao-Bo Lin, Jinshan Zeng, Xia Liu, Zongben Xu

In this paper, we find that SGD is not the unique greedy criterion and introduce a new greedy criterion, called "$\delta$-greedy threshold" for learning.

Distributed learning with regularized least squares

no code implementations11 Aug 2016 Shao-Bo Lin, Xin Guo, Ding-Xuan Zhou

We study distributed learning with the least squares regularization scheme in a reproducing kernel Hilbert space (RKHS).
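The distributed scheme analyzed there, as a minimal numpy sketch: partition the sample into m blocks, solve kernel ridge regression on each block, and average the local predictors. The Gaussian kernel and all parameter values are illustrative assumptions:

```python
import numpy as np

def gauss_kernel(a, b, sigma=0.3):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * sigma ** 2))

def krr(x, y, xq, lam=1e-4, sigma=0.3):
    # single-machine kernel ridge regression: solve (K + n*lam*I) alpha = y
    K = gauss_kernel(x, x, sigma)
    alpha = np.linalg.solve(K + len(x) * lam * np.eye(len(x)), y)
    return gauss_kernel(xq, x, sigma) @ alpha

def distributed_krr(x, y, xq, m=4, lam=1e-4, sigma=0.3, seed=0):
    # divide-and-conquer: average the m local KRR predictions
    idx = np.random.default_rng(seed).permutation(len(x))
    return np.mean([krr(x[b], y[b], xq, lam, sigma)
                    for b in np.array_split(idx, m)], axis=0)
```

With m = 1 the scheme reduces exactly to single-machine KRR; the point of the analysis is how large m may grow before the averaged estimator loses the optimal rate.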

Learning rates for classification with Gaussian kernels

no code implementations28 Feb 2017 Shao-Bo Lin, Jinshan Zeng, Xiangyu Chang

This paper aims at refined error analysis for binary classification using support vector machine (SVM) with Gaussian kernel and convex loss.

Binary Classification, Classification, +2

Global Convergence of Block Coordinate Descent in Deep Learning

2 code implementations1 Mar 2018 Jinshan Zeng, Tim Tsz-Kit Lau, Shao-Bo Lin, Yuan YAO

Deep learning has aroused extensive attention due to its great empirical success.

Construction of neural networks for realization of localized deep learning

no code implementations9 Mar 2018 Charles K. Chui, Shao-Bo Lin, Ding-Xuan Zhou

The subject of deep learning has recently attracted users of machine learning from various disciplines, including: medical diagnosis and bioinformatics, financial market analysis and online advertisement, speech and handwriting recognition, computer vision and natural language processing, time series forecasting, and search engines.

Dimensionality Reduction, Handwriting Recognition, +3

Generalization and Expressivity for Deep Nets

no code implementations10 Mar 2018 Shao-Bo Lin

Generalization and expressivity are two widely used measurements to quantify theoretical behaviors of deep learning.

Learning Theory

Learning through deterministic assignment of hidden parameters

no code implementations22 Mar 2018 Jian Fang, Shao-Bo Lin, Zongben Xu

Supervised learning frequently boils down to determining hidden and bright parameters in a parameterized hypothesis space based on finite input-output samples.

Realizing data features by deep nets

no code implementations1 Jan 2019 Zheng-Chu Guo, Lei Shi, Shao-Bo Lin

Based on refined covering number estimates, we find that, to realize some complex data features, deep nets can improve the performances of shallow neural networks (shallow nets for short) without requiring additional capacity costs.

On ADMM in Deep Learning: Convergence and Saturation-Avoidance

1 code implementation6 Feb 2019 Jinshan Zeng, Shao-Bo Lin, Yuan YAO, Ding-Xuan Zhou

In this paper, we develop an alternating direction method of multipliers (ADMM) for deep neural networks training with sigmoid-type activation functions (called \textit{sigmoid-ADMM pair}), mainly motivated by the gradient-free nature of ADMM in avoiding the saturation of sigmoid-type activations and the advantages of deep neural networks with sigmoid-type activations (called deep sigmoid nets) over their rectified linear unit (ReLU) counterparts (called deep ReLU nets) in terms of approximation.

Deep Neural Networks for Rotation-Invariance Approximation and Learning

no code implementations3 Apr 2019 Charles K. Chui, Shao-Bo Lin, Ding-Xuan Zhou

Based on the tree architecture, the objective of this paper is to design deep neural networks with two or more hidden layers (called deep nets) for realization of radial functions so as to enable rotational invariance for near-optimal function approximation in an arbitrarily high dimensional Euclidean space.

Distributed filtered hyperinterpolation for noisy data on the sphere

no code implementations6 Oct 2019 Shao-Bo Lin, Yu Guang Wang, Ding-Xuan Zhou

This paper develops distributed filtered hyperinterpolation for noisy data on the sphere, which assigns the data fitting task to multiple servers to find a good approximation of the mapping from input to output data.

Geophysics, Model Selection

Fast Polynomial Kernel Classification for Massive Data

1 code implementation24 Nov 2019 Jinshan Zeng, Minrun Wu, Shao-Bo Lin, Ding-Xuan Zhou

In the era of big data, it is desired to develop efficient machine learning algorithms to tackle massive data challenges such as storage bottleneck, algorithmic scalability, and interpretability.

Classification, General Classification
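A minimal stand-in for the kind of polynomial-kernel classifier this entry concerns (the paper's actual algorithm and its scalability tricks are not reproduced; kernel ridge classification with a degree-3 polynomial kernel is an illustrative assumption):

```python
import numpy as np

def poly_kernel_classify(X, y, Xq, degree=3, lam=1e-3):
    # kernel ridge regression on +/-1 labels with a polynomial kernel,
    # then take the sign of the score as the predicted class
    K = (1.0 + X @ X.T) ** degree
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return np.sign((1.0 + Xq @ X.T) ** degree @ alpha)
```

Degree-3 polynomial features include the cross term x1*x2, so this separates XOR-type data that a linear kernel cannot, while the kernel matrix stays cheap to form.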

Realization of spatial sparseness by deep ReLU nets with massive data

no code implementations16 Dec 2019 Charles K. Chui, Shao-Bo Lin, Bo Zhang, Ding-Xuan Zhou

The great success of deep learning poses urgent challenges for understanding its working mechanism and rationality.

Learning Theory

Adaptive Stopping Rule for Kernel-based Gradient Descent Algorithms

no code implementations9 Jan 2020 Xiangyu Chang, Shao-Bo Lin

In this paper, we propose an adaptive stopping rule for kernel-based gradient descent (KGD) algorithms.

Learning Theory
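In sketch form, KGD iterates on the coefficient vector and an adaptive rule decides when to stop. The discrepancy-style rule below (stop once the RMS training residual falls under a tolerance) is a simplified stand-in for the paper's rule, and all names and values are illustrative:

```python
import numpy as np

def kgd_with_stopping(K, y, eta=1.0, tol=0.1, max_iter=20000):
    """Kernel gradient descent written on the coefficient vector alpha
    (f = sum_i alpha_i K(x_i, .)); stop once the RMS residual <= tol."""
    n = len(y)
    alpha = np.zeros(n)
    t = 0
    for t in range(1, max_iter + 1):
        residual = y - K @ alpha
        if np.linalg.norm(residual) / np.sqrt(n) <= tol:
            break                      # adaptive stop: fit reached tolerance
        alpha += (eta / n) * residual
    return alpha, t
```

For step sizes making I - eta*K/n a contraction on the range of K (eta = 1 suffices for kernels with unit diagonal), the residual norm is non-increasing, so a looser tolerance never stops later than a tighter one.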

Distributed Learning with Dependent Samples

no code implementations10 Feb 2020 Zirui Sun, Shao-Bo Lin

This paper focuses on learning rate analysis of distributed kernel ridge regression for strong mixing sequences.

regression

Distributed Kernel Ridge Regression with Communications

no code implementations27 Mar 2020 Shao-Bo Lin, Di Wang, Ding-Xuan Zhou

This paper focuses on generalization performance analysis for distributed algorithms in the framework of learning theory.

Learning Theory, regression

Depth Selection for Deep ReLU Nets in Feature Extraction and Generalization

no code implementations1 Apr 2020 Zhi Han, Siquan Yu, Shao-Bo Lin, Ding-Xuan Zhou

One of the most important challenges of deep learning is to figure out relations between a feature and the depth of deep neural networks (deep nets for short) to reflect the necessity of depth.

Feature Engineering, Representation Learning

Kernel Interpolation of High Dimensional Scattered Data

no code implementations3 Sep 2020 Shao-Bo Lin, Xiangyu Chang, Xingping Sun

Data sites selected from modeling high-dimensional problems often appear scattered in non-paternalistic ways.

Clustering, Vocal Bursts Intensity Prediction

Kernel-based L_2-Boosting with Structure Constraints

no code implementations16 Sep 2020 Yao Wang, Xin Guo, Shao-Bo Lin

Numerically, we carry out a series of simulations to show the promising performance of KReBooT in terms of its good generalization, near over-fitting resistance and structure constraints.

Universal Consistency of Deep Convolutional Neural Networks

no code implementations23 Jun 2021 Shao-Bo Lin, Kaidong Wang, Yao Wang, Ding-Xuan Zhou

Compared with avid research activities of deep convolutional neural networks (DCNNs) in practice, the study of theoretical behaviors of DCNNs lags heavily behind.

Nyström Regularization for Time Series Forecasting

no code implementations13 Nov 2021 Zirui Sun, Mingwei Dai, Yao Wang, Shao-Bo Lin

This paper focuses on learning rate analysis of Nystr\"{o}m regularization with sequential sub-sampling for $\tau$-mixing time series.

Time Series, Time Series Forecasting
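A sketch of Nyström regularization with sequential sub-sampling: take the first m observations of the series as landmarks (rather than sampling them uniformly, which would disturb the dependence structure the analysis cares about) and solve the reduced kernel system. Kernel, bandwidth, and regularization values are illustrative assumptions:

```python
import numpy as np

def gk(a, b, sigma=0.3):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * sigma ** 2))

def nystrom_krr(x, y, m=30, lam=1e-4, sigma=0.3):
    # sequential sub-sampling: landmarks are simply the first m observations
    xm = x[:m]
    Knm = gk(x, xm, sigma)                       # n x m cross-kernel
    Kmm = gk(xm, xm, sigma)                      # m x m landmark kernel
    A = Knm.T @ Knm + len(x) * lam * Kmm + 1e-10 * np.eye(m)
    alpha = np.linalg.solve(A, Knm.T @ y)
    return lambda xq: gk(xq, xm, sigma) @ alpha  # predictor with m coefficients
```

Training and prediction then scale with the sub-sample size m rather than the full sample size n.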

Generalization Performance of Empirical Risk Minimization on Over-parameterized Deep ReLU Nets

no code implementations28 Nov 2021 Shao-Bo Lin, Yao Wang, Ding-Xuan Zhou

In this paper, we study the generalization performance of global minima for implementing empirical risk minimization (ERM) on over-parameterized deep ReLU nets.

Radial Basis Function Approximation with Distributively Stored Data on Spheres

no code implementations5 Dec 2021 Han Feng, Shao-Bo Lin, Ding-Xuan Zhou

This paper proposes a distributed weighted regularized least squares algorithm (DWRLS) based on spherical radial basis functions and spherical quadrature rules to tackle spherical data that are stored across numerous local servers and cannot be shared with each other.

Kernel-Based Distributed Q-Learning: A Scalable Reinforcement Learning Approach for Dynamic Treatment Regimes

no code implementations21 Feb 2023 Di Wang, Yao Wang, Shaojie Tang, Shao-Bo Lin

The novelties of our research are as follows: 1) From a methodological perspective, we present a novel and scalable approach for generating DTRs by combining distributed learning with Q-learning.

Learning Theory, Medical Diagnosis, +2

Sketching with Spherical Designs for Noisy Data Fitting on Spheres

no code implementations8 Mar 2023 Shao-Bo Lin, Di Wang, Ding-Xuan Zhou

These interesting findings show that the proposed sketching strategy is capable of fitting massive and noisy data on spheres.

Deep Convolutional Neural Networks with Zero-Padding: Feature Extraction and Learning

1 code implementation30 Jul 2023 Zhi Han, Baichen Liu, Shao-Bo Lin, Ding-Xuan Zhou

This paper studies the performance of deep convolutional neural networks (DCNNs) with zero-padding in feature extraction and learning.

Translation

Optimal Approximation and Learning Rates for Deep Convolutional Neural Networks

no code implementations7 Aug 2023 Shao-Bo Lin

This paper focuses on approximation and learning performance analysis for deep convolutional neural networks with zero-padding and max-pooling.

Adaptive Distributed Kernel Ridge Regression: A Feasible Distributed Learning Scheme for Data Silos

no code implementations8 Sep 2023 Di Wang, Xiaotong Liu, Shao-Bo Lin, Ding-Xuan Zhou

Data silos, mainly caused by privacy and interoperability, significantly constrain collaborations among different organizations with similar data for the same purpose.

Decision Making, regression

Distributed Uncertainty Quantification of Kernel Interpolation on Spheres

no code implementations25 Oct 2023 Shao-Bo Lin, Xingping Sun, Di Wang

For radial basis function (RBF) kernel interpolation of scattered data, Schaback in 1995 proved that the attainable approximation error and the condition number of the underlying interpolation matrix cannot be made small simultaneously.

Uncertainty Quantification

Lifting the Veil: Unlocking the Power of Depth in Q-learning

no code implementations27 Oct 2023 Shao-Bo Lin, Tao Li, Shaojie Tang, Yao Wang, Ding-Xuan Zhou

In this paper, we make fundamental contributions to the field of reinforcement learning by answering the following three questions: Why does deep Q-learning perform so well?

Learning Theory, Management, +2

Adaptive Parameter Selection for Kernel Ridge Regression

no code implementations10 Dec 2023 Shao-Bo Lin

This paper focuses on parameter selection issues of kernel ridge regression (KRR).

Learning Theory, regression

Weighted Spectral Filters for Kernel Interpolation on Spheres: Estimates of Prediction Accuracy for Noisy Data

no code implementations16 Jan 2024 Xiaotong Liu, Jinxin Wang, Di Wang, Shao-Bo Lin

In this paper, we introduce a weighted spectral filter approach to reduce the condition number of the kernel matrix and then stabilize kernel interpolation.

Image Reconstruction
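The mechanism, in sketch form: eigendecompose the kernel matrix and filter out the tiny eigenvalues that inflate its condition number before inverting. Hard truncation below is a simplified stand-in for the paper's weighted filters:

```python
import numpy as np

def filtered_solve(K, y, cutoff=1e-8):
    # spectral truncation: invert K only on eigenvalues above cutoff * lambda_max
    w, V = np.linalg.eigh(K)
    keep = w > cutoff * w.max()
    Vk = V[:, keep]
    return Vk @ ((Vk.T @ y) / w[keep])
```

The effective condition number drops to at most 1/cutoff, at the price of discarding the component of y lying in the filtered-out eigenspace.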

Integral Operator Approaches for Scattered Data Fitting on Spheres

no code implementations27 Jan 2024 Shao-Bo Lin

This paper focuses on scattered data fitting problems on spheres.
