How to fine-tune deep neural networks in few-shot learning?

1 Dec 2020 · Peng Peng, Jiugen Wang

Deep learning has been widely used in data-intensive applications. However, training a deep neural network often requires a large data set. When there is not enough data available for training, the performance of deep learning models can be even worse than that of shallow networks. It has been shown that few-shot learning can generalize to new tasks with only a few training samples. Fine-tuning a deep model is a simple and effective few-shot learning method. However, how to fine-tune deep learning models (e.g., should the convolution layers or the BN layers be fine-tuned?) still lacks thorough investigation. Hence, in this paper we study how to fine-tune deep models through experimental comparison. Furthermore, the weights of the models are analyzed to verify the feasibility of the fine-tuning method.
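To make the question concrete, the sketch below shows one common way to restrict fine-tuning to a chosen layer type in PyTorch: freeze all parameters of a pretrained backbone, then unfreeze either the convolution weights or the BatchNorm affine parameters. The backbone choice (ResNet-18), the `set_finetune_mode` helper, and the optimizer settings are illustrative assumptions, not the authors' exact experimental setup.

```python
# Minimal sketch (assumptions noted above): selectively fine-tune either the
# convolution layers or the BatchNorm layers of a pretrained backbone.
import torch
import torch.nn as nn
from torchvision import models


def set_finetune_mode(model: nn.Module, mode: str) -> None:
    """Freeze every parameter, then unfreeze only the requested layer type.

    mode: "bn"   -> fine-tune only BatchNorm affine parameters
          "conv" -> fine-tune only convolution weights
    """
    for p in model.parameters():
        p.requires_grad = False

    for module in model.modules():
        if mode == "bn" and isinstance(module, nn.BatchNorm2d):
            for p in module.parameters():
                p.requires_grad = True
        elif mode == "conv" and isinstance(module, nn.Conv2d):
            for p in module.parameters():
                p.requires_grad = True


# Hypothetical backbone for illustration; any pretrained CNN would do.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
set_finetune_mode(backbone, mode="bn")

# Pass only the unfrozen parameters to the optimizer for the few-shot task.
trainable = [p for p in backbone.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-3)
```

Switching `mode` between `"bn"` and `"conv"` reproduces the kind of comparison the paper investigates: which subset of a deep model's parameters is worth updating when only a few training samples are available.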

