1 code implementation • 30 Nov 2022 • Jijie Wu, Dongliang Chang, Aneeshan Sain, Xiaoxu Li, Zhanyu Ma, Jie Cao, Jun Guo, Yi-Zhe Song
Conventional few-shot learning methods, however, cannot be naively adopted for this fine-grained setting -- a quick pilot study reveals that they in fact push for the opposite (i.e., lower inter-class variations and higher intra-class variations).
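The inter-class and intra-class variations mentioned above can be quantified in several ways; the sketch below uses one common convention (distance to class centroids for intra-class, pairwise centroid distances for inter-class). The function name and metric choice are illustrative, not from the paper.

```python
import numpy as np

def variations(features, labels):
    """Intra-class variation: mean distance of samples to their class
    centroid. Inter-class variation: mean pairwise distance between
    class centroids. One simple way to quantify the two notions."""
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    intra = float(np.mean([
        np.linalg.norm(features[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(classes)
    ]))
    inter = float(np.mean([
        np.linalg.norm(centroids[i] - centroids[j])
        for i in range(len(classes)) for j in range(i + 1, len(classes))
    ]))
    return intra, inter
```

Under this convention, a fine-grained-friendly embedding would show high inter-class and low intra-class variation.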
no code implementations • 17 May 2021 • Xiaoxu Li, Xiaochen Yang, Zhanyu Ma, Jing-Hao Xue
Few-shot image classification is a challenging problem that aims to achieve human-level recognition from only a small number of training images.
no code implementations • 25 Jan 2021 • Yurong Guo, Zhanyu Ma, Xiaoxu Li, Yuan Dong
We argue that this way of measuring sample relations models only the sample-to-sample relation, while neglecting the specificity of different tasks.
1 code implementation • 29 Nov 2020 • Xiaoxu Li, Jijie Wu, Zhuo Sun, Zhanyu Ma, Jie Cao, Jing-Hao Xue
Motivated by this, we propose a so-called \textit{Bi-Similarity Network} (\textit{BSNet}) that consists of a single embedding module and a bi-similarity module of two similarity measures.
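The BSNet idea described above -- one shared embedding feeding two different similarity measures -- can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a linear map stands in for the embedding module, and cosine and (negative) Euclidean similarity stand in for the two learned similarity branches.

```python
import numpy as np

def embed(x, W):
    """Shared embedding module (a single linear map in this sketch)."""
    return x @ W

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def euclidean_sim(a, b):
    """Negated distance, so larger means more similar."""
    return float(-np.linalg.norm(a - b))

def bsnet_score(query, support, W):
    """Average the two similarity measures over one shared embedding."""
    q, s = embed(query, W), embed(support, W)
    return 0.5 * (cosine_sim(q, s) + euclidean_sim(q, s))
```

Combining two complementary similarity measures regularizes the embedding: features must support both notions of similarity rather than overfitting to one.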
no code implementations • 12 Oct 2020 • Zeyu Song, Dongliang Chang, Zhanyu Ma, Xiaoxu Li, Zheng-Hua Tan
The loss function is a key component in deep learning models.
1 code implementation • 27 Jun 2020 • Xiaoxu Li, Liyun Yu, Xiaochen Yang, Zhanyu Ma, Jing-Hao Xue, Jie Cao, Jun Guo
Despite achieving state-of-the-art performance, deep learning methods generally require a large amount of labeled data during training and may suffer from overfitting when the sample size is small.
no code implementations • 22 May 2020 • Xiaoxu Li, Zhuo Sun, Jing-Hao Xue, Zhanyu Ma
Few-shot meta-learning has recently been revived, with the expectation of mimicking humanity's fast adaptation to new concepts based on prior knowledge.
1 code implementation • 20 Apr 2020 • Xiaoxu Li, Dongliang Chang, Zhanyu Ma, Zheng-Hua Tan, Jing-Hao Xue, Jie Cao, Jingyi Yu, Jun Guo
A deep neural network of multiple nonlinear layers forms a large function space, which can easily lead to overfitting when it encounters small-sample data.
3 code implementations • 11 Feb 2020 • Dongliang Chang, Yifeng Ding, Jiyang Xie, Ayan Kumar Bhunia, Xiaoxu Li, Zhanyu Ma, Ming Wu, Jun Guo, Yi-Zhe Song
The proposed loss function, termed as mutual-channel loss (MC-Loss), consists of two channel-specific components: a discriminality component and a diversity component.
Ranked #29 on Fine-Grained Image Classification on FGVC Aircraft
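The two MC-Loss components named above can be illustrated with a simplified NumPy sketch: each class is assigned a fixed group of feature channels, the discriminality term pools each group into a class logit, and the diversity term rewards channels within a group for attending to different spatial locations. This is a simplification of the paper's loss (no random channel masking, and the discriminality term here stops at the pooled logits rather than the full cross-entropy); the function name and the group size `xi` are notation assumed for this sketch.

```python
import numpy as np

def mc_loss_components(features, num_classes, xi):
    """Simplified sketch of the two MC-Loss terms.

    features: (num_classes * xi, H, W) feature maps, with the xi
    channels of each class stored contiguously."""
    C, H, W = features.shape
    assert C == num_classes * xi
    groups = features.reshape(num_classes, xi, H * W)

    # Discriminality (simplified): cross-channel max pooling within each
    # class group, then global average pooling, giving one logit per class.
    logits = groups.max(axis=1).mean(axis=1)          # (num_classes,)

    # Diversity: softmax each channel over spatial positions, then take
    # the element-wise max across a group's channels and sum over
    # positions. The value grows toward xi when channels focus on
    # different locations, and shrinks toward 1 when they overlap.
    e = np.exp(groups - groups.max(axis=2, keepdims=True))
    attn = e / e.sum(axis=2, keepdims=True)           # (classes, xi, H*W)
    diversity = float(attn.max(axis=1).sum(axis=1).mean())

    return logits, diversity
```

Training would maximize the diversity term (and minimize cross-entropy on the logits), pushing the channels of each class to cover complementary discriminative parts.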
no code implementations • 14 Feb 2019 • Zhanyu Ma, Dongliang Chang, Xiaoxu Li
Experimental results on two fine-grained vehicle datasets, the Stanford Cars-196 dataset and the CompCars dataset, demonstrate that the proposed layer improves the classification accuracy of deep neural networks on fine-grained vehicle classification while massively reducing the number of parameters.