no code implementations • 23 Mar 2023 • Shaobo Lin, Kun Wang, Xingyu Zeng, Rui Zhao
To construct a representative synthetic training dataset, we maximize the diversity of the selected images via a sample-based and cluster-based method.
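A minimal sketch of what a cluster-based diversity selection could look like, assuming image feature vectors are already extracted; the feature extractor, the choice of k-means, and the "closest-to-centroid" rule are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: pick a diverse subset of synthetic images by clustering their
# feature vectors and keeping the sample closest to each cluster centre.
import numpy as np
from sklearn.cluster import KMeans

def select_diverse_subset(features: np.ndarray, k: int) -> np.ndarray:
    """Return indices of k images, one representative per k-means cluster."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    chosen = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        # keep the member closest to the cluster centre as its representative
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        chosen.append(members[np.argmin(dists)])
    return np.array(chosen)

# toy usage: 200 synthetic images embedded into 64-d features
feats = np.random.rand(200, 64)
print(select_diverse_subset(feats, k=10))
```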
no code implementations • 28 Feb 2023 • Shaobo Lin, Kun Wang, Xingyu Zeng, Rui Zhao
Specifically, we first discover the base images that contain false positives (FPs) of the novel categories and select a certain number of samples from them to keep the base and novel categories balanced.
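One plausible reading of that balancing step, sketched below; the `fp_counts` interface, the 1:1 target ratio, and random sampling are all my assumptions rather than the paper's stated rule.

```python
# Hedged sketch: from base-set images that produce false positives (FPs) for novel
# categories, sample just enough images to roughly match the novel-set size.
import numpy as np

def select_balancing_base_images(fp_counts: dict, n_novel_samples: int, seed: int = 0):
    """fp_counts maps base-image id -> number of novel-category FPs it contains."""
    rng = np.random.default_rng(seed)
    candidates = [img for img, c in fp_counts.items() if c > 0]   # images that confuse the detector
    n_keep = min(n_novel_samples, len(candidates))                # keep base/novel roughly balanced
    return list(rng.choice(candidates, size=n_keep, replace=False))

# toy usage
counts = {f"img_{i}": int(i % 3 == 0) for i in range(30)}
print(select_balancing_base_images(counts, n_novel_samples=5))
```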
no code implementations • 26 Jan 2023 • Shaobo Lin, Xingyu Zeng, Rui Zhao
The generalization power of the pre-trained model is key to few-shot deep learning.
no code implementations • 12 Oct 2022 • Shaobo Lin, Xingyu Zeng, Rui Zhao
Conventional training of deep neural networks usually requires a substantial amount of data with expensive human annotations.
no code implementations • 29 Sep 2021 • Shaobo Lin, Xingyu Zeng, Rui Zhao
Conventional training of deep neural networks usually requires a substantial amount of data with expensive human annotations.
no code implementations • 30 Apr 2016 • Shaobo Lin, Jinshan Zeng, Xiaoqin Zhang
In this paper, we aim to develop scalable neural network-type learning systems.
no code implementations • 6 May 2015 • Shaobo Lin, Yao Wang, Lin Xu
Boosting is a learning scheme that combines weak prediction rules to produce a strong composite estimator, with the underlying intuition that one can obtain accurate prediction rules by combining "rough" ones.
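To make the "combine rough rules into a strong estimator" idea concrete, here is a minimal L2-boosting sketch with depth-1 regression trees as the weak rules; the paper's specific boosting variant, step size, and stopping rule are not assumed here.

```python
# Minimal L2 boosting: repeatedly fit a weak rule (a stump) to the residual and add
# a shrunken copy of it to the composite estimator.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def l2_boost(X, y, n_rounds=50, step=0.1):
    pred = np.zeros_like(y, dtype=float)
    learners = []
    for _ in range(n_rounds):
        residual = y - pred                      # fit each weak rule to the current residual
        stump = DecisionTreeRegressor(max_depth=1).fit(X, residual)
        pred += step * stump.predict(X)          # shrunken greedy update
        learners.append(stump)
    return learners, pred

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
_, fitted = l2_boost(X, y)
print("training MSE:", np.mean((y - fitted) ** 2))
```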
no code implementations • 7 Mar 2015 • Shaobo Lin, Xingping Sun, Zongben Xu, Jinshan Zeng
On the one hand, based on a worst-case learning rate analysis, we show that the regularization term in polynomial kernel regression is not necessary.
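A small sketch of the setting that claim refers to, as I read it: polynomial-kernel regression solved by a plain least-squares (pseudo-inverse) fit, i.e. with no explicit regularization term. The kernel degree and the toy data are illustrative assumptions, not the paper's experiment.

```python
# Unregularized polynomial kernel regression via the minimum-norm least-squares solution.
import numpy as np

def poly_kernel(A, B, degree=3):
    return (1.0 + A @ B.T) ** degree

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = np.cos(np.pi * X[:, 0]) + 0.05 * rng.normal(size=100)

K = poly_kernel(X, X)
alpha = np.linalg.pinv(K) @ y        # least-squares coefficients, lambda = 0
pred = K @ alpha
print("training MSE without regularization:", np.mean((pred - y) ** 2))
```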
no code implementations • 14 Feb 2015 • Shaobo Lin
Due to the localization property in the frequency domain, we prove that the regularization parameter of the kernel ridge regression associated with the needlet kernel can decrease arbitrarily fast.
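A generic kernel ridge regression sketch to make the role of the regularization parameter concrete; a Gaussian kernel stands in for the needlet kernel, whose spherical construction is beyond this snippet, so this only illustrates how a small lambda enters the closed-form solve, not the paper's result.

```python
# Closed-form kernel ridge regression: alpha = (K + n*lam*I)^{-1} y, with lam swept small.
import numpy as np

def gaussian_kernel(A, B, gamma=2.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam):
    K = gaussian_kernel(X, X)
    return np.linalg.solve(K + len(X) * lam * np.eye(len(X)), y)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(80, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=80)
for lam in (1e-1, 1e-4, 1e-8):       # the regularization parameter can be taken very small
    alpha = krr_fit(X, y, lam)
    print(lam, np.mean((gaussian_kernel(X, X) @ alpha - y) ** 2))
```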
no code implementations • 13 Nov 2014 • Lin Xu, Shaobo Lin, Jinshan Zeng, Zongben Xu
Orthogonal greedy learning (OGL) is a stepwise learning scheme that, at each greedy step, adds a new atom from a dictionary via steepest gradient descent and builds the estimator by orthogonally projecting the target function onto the space spanned by the selected atoms.
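A minimal sketch of one orthogonal greedy step in that spirit: pick the atom most correlated with the current residual, then re-fit by orthogonal projection onto all atoms selected so far. The random dictionary, the fixed number of steps, and the correlation criterion are illustrative assumptions.

```python
# Orthogonal greedy selection over a dictionary evaluated on the data.
import numpy as np

def orthogonal_greedy(D, y, n_steps=5):
    """D: (n_samples, n_atoms) dictionary matrix; y: target values."""
    selected, residual = [], y.copy()
    for _ in range(n_steps):
        scores = np.abs(D.T @ residual)                 # greedy (steepest-descent) criterion
        scores[selected] = -np.inf                      # do not reselect an atom
        selected.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)  # orthogonal projection
        residual = y - D[:, selected] @ coef
    return selected, coef

rng = np.random.default_rng(0)
D = rng.normal(size=(100, 50))
y = D[:, [3, 17]] @ np.array([2.0, -1.0]) + 0.01 * rng.normal(size=100)
print(orthogonal_greedy(D, y, n_steps=2))
```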
no code implementations • 24 Jan 2014 • Shaobo Lin, Xia Liu, Jian Fang, Zongben Xu
On the one hand, we find that the randomness causes an additional uncertainty problem for the extreme learning machine (ELM), both in approximation and in learning.
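A bare-bones ELM sketch to make that randomness concrete: the hidden weights are drawn at random and only the output layer is solved by least squares, so repeated runs give different estimators. The layer size, activation, and toy data are illustrative choices.

```python
# Extreme learning machine: random hidden layer + least-squares output weights.
import numpy as np

def elm_fit_predict(X, y, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))     # random, never-trained input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                          # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # only the output weights are learned
    return H @ beta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(4 * X[:, 0]) + 0.05 * rng.normal(size=200)
for seed in (0, 1, 2):                              # different random draws -> different fits
    pred = elm_fit_predict(X, y, seed=seed)
    print(seed, np.mean((pred - y) ** 2))
```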
no code implementations • 19 Dec 2013 • Shaobo Lin, Jinshan Zeng, Jian Fang, Zongben Xu
Regularization is a well-recognized and powerful strategy for improving the performance of a learning machine, and $l^q$ regularization schemes with $0<q<\infty$ are in widespread use.
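An illustrative sketch of what the $l^q$ penalty does to a single coefficient: with an orthonormal design the penalized least-squares problem decouples, so each coefficient solves a one-dimensional problem, approximated below by a grid search. The values of $q$, the penalty weight, and the grid are toy choices, not the paper's setup.

```python
# 1-D l^q shrinkage: min_w 0.5*(w - z)^2 + lam*|w|^q, solved approximately on a grid.
import numpy as np

def lq_shrink(z, lam, q, grid=np.linspace(-3, 3, 6001)):
    obj = 0.5 * (grid - z) ** 2 + lam * np.abs(grid) ** q
    return grid[np.argmin(obj)]

z = 1.2                               # least-squares coefficient before shrinkage
for q in (0.5, 1.0, 2.0):
    print(q, lq_shrink(z, lam=0.5, q=q))
```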
no code implementations • 25 Jul 2013 • Shaobo Lin, Chen Xu, Jinshan Zeng, Jian Fang
To facilitate the use of $l^{q}$-regularization, we seek a modeling strategy in which an elaborate selection of $q$ can be avoided.