Efficient Model for Image Classification With Regularization Tricks

1 Feb 2020 · Taehyeon Kim, Jonghyup Kim, Seyoung Yun

In the MicroNet Challenge 2019, competitors designed neural network architectures under tight resource budgets, e.g., the number of parameters and FLOPs. In this study, we describe the approaches of team KAIST, which won the second and third places in the CIFAR-100 classification task of the contest. We tackle the task in four steps. First, we design a novel baseline network appropriate for the CIFAR-100 dataset. Second, we train this network with our novel structural regularization methods, which encourage the orthogonality of weights and, during training, replace the ground-truth label of each example with a softened label vector that carries class-wise similarity information derived from the representative feature vectors of each class. Third, we search for the most potent data-augmentation methods to obtain significant improvements in accuracy. Finally, we perform sparse training via a pruning technique. Our final score is 0.0054, a 370x improvement over the baseline for the CIFAR-100 dataset. Ours is the only entry that finished in the top 10% for both parameter storage and computation on the CIFAR-100 classification task. The source code is available at https://github.com/Kthyeon/micronet_neurips_challenge.
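To make the two regularization tricks concrete, here is a minimal PyTorch-style sketch of an orthogonality regularizer, assuming a standard network of conv/linear layers. The function name, the Frobenius-norm formulation, and the coefficient `lam` are illustrative assumptions, not the authors' exact code.

```python
import torch
import torch.nn as nn

def orthogonality_penalty(model):
    # Soft orthogonality regularizer (illustrative assumption, not the
    # authors' exact formulation): flatten each conv/linear weight into a
    # 2-D matrix W and penalize ||W W^T - I||_F^2, which is zero exactly
    # when the rows of W are orthonormal.
    penalty = 0.0
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            w = module.weight.reshape(module.weight.size(0), -1)
            gram = w @ w.t()
            eye = torch.eye(gram.size(0), device=gram.device)
            penalty = penalty + ((gram - eye) ** 2).sum()
    return penalty

# Usage inside a training step (lam is a tuning knob, assumed here):
# loss = criterion(logits, targets) + lam * orthogonality_penalty(model)
```

The label trick replaces each one-hot target with a vector that mixes in class-wise similarity computed from representative feature vectors (e.g., per-class centroids of penultimate-layer features). A hedged sketch follows; `alpha`, `temperature`, and the softmax-over-similarities choice are assumptions rather than the paper's exact recipe.

```python
def similarity_soft_labels(centroids, targets, alpha=0.1, temperature=1.0):
    # centroids: (C, D) representative feature vector of each class,
    #            e.g., a running mean of features per class.
    # targets:   (B,) integer class labels.
    # Returns (B, C) soft targets: (1 - alpha) of the one-hot base plus
    # alpha of a similarity-derived distribution over classes.
    sims = centroids @ centroids.t()                     # (C, C) class similarity
    sim_dist = torch.softmax(sims / temperature, dim=1)  # each row sums to 1
    onehot = torch.eye(centroids.size(0), device=centroids.device)[targets]
    return (1.0 - alpha) * onehot + alpha * sim_dist[targets]

# Train against these soft targets, e.g. with a soft cross-entropy:
# soft = similarity_soft_labels(centroids, targets)
# loss = -(soft * torch.log_softmax(logits, dim=1)).sum(dim=1).mean()
```

For the final sparse-training step, a generic global magnitude-pruning pass is sketched below as a stand-in; the paper's actual pruning schedule and criterion may differ.

```python
def magnitude_prune(model, sparsity=0.5):
    # Zero out the smallest-magnitude fraction of all conv/linear weights
    # globally (generic magnitude pruning, assumed for illustration).
    weights = [m.weight for m in model.modules()
               if isinstance(m, (nn.Conv2d, nn.Linear))]
    all_w = torch.cat([w.detach().abs().flatten() for w in weights])
    k = max(1, int(sparsity * all_w.numel()))
    threshold = all_w.kthvalue(k).values
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).float())
```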
