Search Results for author: Shaobo Lin

Found 13 papers, 0 papers with code

Explore the Power of Synthetic Data on Few-shot Object Detection

no code implementations • 23 Mar 2023 • Shaobo Lin, Kun Wang, Xingyu Zeng, Rui Zhao

To construct a representative synthetic training dataset, we maximize the diversity of the selected images via sample-based and cluster-based selection methods (see the sketch below).

Few-Shot Object Detection, Object, +3
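
A minimal sketch of the cluster-based half of that selection, assuming the candidate images have already been embedded as feature vectors; k-means, the helper name select_diverse, and n_select are illustrative assumptions, not details from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_diverse(features: np.ndarray, n_select: int, seed: int = 0) -> np.ndarray:
    """Cluster the embeddings and keep the image nearest each centroid,
    so the selected set spreads across feature space."""
    km = KMeans(n_clusters=n_select, n_init=10, random_state=seed).fit(features)
    chosen = []
    for c in range(n_select):
        members = np.where(km.labels_ == c)[0]                 # images in cluster c
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        chosen.append(members[np.argmin(dists)])               # most central image
    return np.asarray(chosen)
```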

An Effective Crop-Paste Pipeline for Few-shot Object Detection

no code implementations • 28 Feb 2023 • Shaobo Lin, Kun Wang, Xingyu Zeng, Rui Zhao

Specifically, we first identify the base images that contain false positives (FP) of novel categories and select a certain number of samples from them to balance the base and novel categories (the crop-paste step itself is sketched below).

Data Augmentation, Few-Shot Object Detection, +1
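
A minimal sketch of the crop-and-paste operation at the core of such a pipeline, assuming PIL images and (x1, y1, x2, y2) boxes; the function name and the uniform random placement are assumptions:

```python
import random
from PIL import Image

def crop_paste(novel_img: Image.Image, box: tuple, base_img: Image.Image) -> Image.Image:
    """Crop a novel-category object out of one image and paste it at a
    random valid location in a base image (placement policy is assumed)."""
    patch = novel_img.crop(box)                      # box = (x1, y1, x2, y2)
    out = base_img.copy()
    max_x = max(out.width - patch.width, 0)
    max_y = max(out.height - patch.height, 0)
    out.paste(patch, (random.randint(0, max_x), random.randint(0, max_y)))
    return out
```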

Explore the Power of Dropout on Few-shot Learning

no code implementations • 26 Jan 2023 • Shaobo Lin, Xingyu Zeng, Rui Zhao

The generalization power of the pre-trained model is key to few-shot deep learning.

Deep Learning, Few-Shot Image Classification, +3

A Unified Framework with Meta-dropout for Few-shot Learning

no code implementations • 12 Oct 2022 • Shaobo Lin, Xingyu Zeng, Rui Zhao

Conventional training of deep neural networks usually requires a substantial amount of data with expensive human annotations.

Few-Shot Image Classification, Few-Shot Learning, +2

MDFL: A Unified Framework with Meta-dropout for Few-shot Learning

no code implementations • 29 Sep 2021 • Shaobo Lin, Xingyu Zeng, Rui Zhao

Conventional training of deep neural networks usually requires a substantial amount of data with expensive human annotations.

Few-Shot Image Classification, Few-Shot Learning, +2

Constructive neural network learning

no code implementations • 30 Apr 2016 • Shaobo Lin, Jinshan Zeng, Xiaoqin Zhang

In this paper, we aim to develop scalable neural-network-type learning systems.

Re-scale boosting for regression and classification

no code implementations • 6 May 2015 • Shaobo Lin, Yao Wang, Lin Xu

Boosting is a learning scheme that combines weak prediction rules to produce a strong composite estimator, with the underlying intuition that one can obtain accurate prediction rules by combining "rough" ones.

Classification, General Classification, +1
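
A minimal sketch of boosting for regression in this spirit: L2 boosting on decision stumps, with a shrinkage factor standing in for the paper's re-scaling; the stump depth, step size, and round count are assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def l2_boost(X, y, rounds=100, step=0.1):
    """Repeatedly fit a weak rule (a stump) to the residual and add a
    shrunken copy of it to the composite estimator."""
    learners, residual = [], y.astype(float).copy()
    for _ in range(rounds):
        stump = DecisionTreeRegressor(max_depth=1).fit(X, residual)
        residual -= step * stump.predict(X)          # move toward the target
        learners.append(stump)
    return lambda Xq: step * sum(s.predict(Xq) for s in learners)
```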

Model selection of polynomial kernel regression

no code implementations • 7 Mar 2015 • Shaobo Lin, Xingping Sun, Zongben Xu, Jinshan Zeng

On the one hand, based on a worst-case learning rate analysis, we show that the regularization term in polynomial kernel regression is not necessary (made concrete in the sketch below).

Model Selection, regression
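
To make the claim concrete: dropping the regularization term turns polynomial kernel regression into plain kernel least squares, which a pseudo-inverse solves directly. A minimal sketch (the degree and the pinv-based solver are assumptions):

```python
import numpy as np

def poly_kernel(A, B, degree=3):
    return (A @ B.T + 1.0) ** degree

def fit_poly_regression(X, y, degree=3):
    """Unregularized polynomial kernel regression: solve K a = y with a
    pseudo-inverse instead of the ridge system (K + lam*I) a = y."""
    K = poly_kernel(X, X, degree)
    alpha = np.linalg.pinv(K) @ y
    return lambda Xq: poly_kernel(Xq, X, degree) @ alpha
```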

Nonparametric regression using needlet kernels for spherical data

no code implementations • 14 Feb 2015 • Shaobo Lin

Due to the localization property in the frequency domain, we prove that the regularization parameter of the kernel ridge regression associated with the needlet kernel can decrease arbitrarily fast.

regression
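
A minimal sketch of kernel ridge regression with the regularization parameter lam made explicit, since that is the quantity the abstract says can decay arbitrarily fast; a Gaussian kernel stands in here for the needlet kernel, and the bandwidth is an assumption:

```python
import numpy as np

def gauss_kernel(A, B, bandwidth=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def fit_krr(X, y, lam=1e-8):
    """Kernel ridge regression: alpha = (K + lam*I)^{-1} y; the smaller
    lam may be taken, the closer this sits to pure interpolation."""
    K = gauss_kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: gauss_kernel(Xq, X) @ alpha
```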

Greedy metrics in orthogonal greedy learning

no code implementations • 13 Nov 2014 • Lin Xu, Shaobo Lin, Jinshan Zeng, Zongben Xu

Orthogonal greedy learning (OGL) is a stepwise learning scheme that adds a new atom from a dictionary via steepest gradient descent and builds the estimator by orthogonally projecting the target function onto the space spanned by the selected atoms at each greedy step (see the sketch below).

Model Selection
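
That description matches orthogonal matching pursuit: pick the atom best aligned with the current residual, then re-fit by orthogonal projection onto all atoms selected so far. A minimal sketch over a finite dictionary matrix, assuming normalized columns and a fixed step count:

```python
import numpy as np

def ogl(D, y, steps=10):
    """Orthogonal greedy learning over dictionary columns D[:, j]: each
    step adds the steepest-descent atom, then projects y orthogonally
    onto the span of everything selected so far."""
    selected, residual = [], y.copy()
    for _ in range(steps):
        j = int(np.argmax(np.abs(D.T @ residual)))   # greedy atom choice
        if j not in selected:
            selected.append(j)
        coef, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
        residual = y - D[:, selected] @ coef         # orthogonal projection
    return selected, coef
```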

Is Extreme Learning Machine Feasible? A Theoretical Assessment (Part II)

no code implementations • 24 Jan 2014 • Shaobo Lin, Xia Liu, Jian Fang, Zongben Xu

On the one hand, we find that the randomness causes an additional uncertainty problem for ELM, in both approximation and learning.
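
For context, an extreme learning machine draws its hidden-layer weights at random and trains only the output weights by least squares, which is exactly where the randomness enters. A minimal sketch; the hidden width, sigmoid activation, and Gaussian initialization are assumptions:

```python
import numpy as np

def train_elm(X, y, hidden=200, seed=0):
    """ELM: random, frozen hidden weights; only the output weights beta
    are fitted (by least squares)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))        # random, never trained
    b = rng.normal(size=hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # the only trained part
    return lambda Xq: (1.0 / (1.0 + np.exp(-(Xq @ W + b)))) @ beta
```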

Learning rates of $l^q$ coefficient regularization learning with Gaussian kernel

no code implementations • 19 Dec 2013 • Shaobo Lin, Jinshan Zeng, Jian Fang, Zongben Xu

Regularization is a well-recognized, powerful strategy for improving the performance of a learning machine, and $l^q$ regularization schemes with $0<q<\infty$ are in widespread use (one standard formulation is displayed below).

Learning Theory
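
One standard way to write an $l^q$ coefficient regularization scheme with a Gaussian kernel $K_\sigma$; this display is reconstructed from the abstract's wording rather than copied from the paper:

$$
f_{z} = \sum_{i=1}^{n} a_i^{*} K_\sigma(x_i, \cdot), \qquad
a^{*} = \arg\min_{a \in \mathbb{R}^n} \frac{1}{n} \sum_{i=1}^{n} \Big( y_i - \sum_{j=1}^{n} a_j K_\sigma(x_j, x_i) \Big)^2 + \lambda \sum_{j=1}^{n} |a_j|^{q}.
$$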

Does generalization performance of $l^q$ regularization learning depend on $q$? A negative example

no code implementations • 25 Jul 2013 • Shaobo Lin, Chen Xu, Jinshan Zeng, Jian Fang

To facilitate the use of $l^{q}$-regularization, we seek a modeling strategy in which an elaborate selection of $q$ can be avoided.
