Let's keep it simple: Using simple architectures to outperform deeper and more complex architectures

22 Aug 2016  ·  Seyyed Hossein Hasanpour, Mohammad Rouhani, Mohsen Fayyaz, Mohammad Sabokrou ·

Major winning Convolutional Neural Networks (CNNs), such as AlexNet, VGGNet, ResNet, and GoogleNet, include tens to hundreds of millions of parameters, which impose considerable computation and memory overhead. This limits their practical use in settings where training time, optimization, and memory are constrained. In contrast, lightweight architectures proposed to address this issue mainly suffer from low accuracy. These inefficiencies mostly stem from following an ad hoc design procedure. We propose a simple architecture, called SimpleNet, based on a set of design principles, and we empirically show that a well-crafted yet simple and reasonably deep architecture can perform on par with deeper and more complex architectures. SimpleNet provides a good trade-off between computation/memory efficiency and accuracy. Our simple 13-layer architecture outperforms most deeper and more complex architectures to date, such as VGGNet, ResNet, and GoogleNet, on several well-known benchmarks, while having 2 to 25 times fewer parameters and operations. This makes it well suited for embedded systems or systems with computational and memory limitations. We achieved a state-of-the-art result on CIFAR-10, outperforming several heavier architectures, a near state-of-the-art result on MNIST, and competitive results on CIFAR-100 and SVHN. We also outperformed much larger and deeper architectures, such as VGGNet and popular ResNet variants, on the ImageNet dataset. Models are made available at: https://github.com/Coderx7/SimpleNet
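To make the parameter-budget claim concrete, the following sketch tallies the parameters of a plain stack of 3x3 convolutions using the standard formula (k·k·in_channels + 1)·out_channels per layer. The channel widths below are illustrative assumptions for a 13-layer stack, not the exact SimpleNet configuration; the VGG16 fully-connected layer is shown for contrast, since that is where much of VGG's ~138M parameters sit.

```python
# Parameter count of a 2D conv layer: (k*k*in_ch + 1) * out_ch  (weights + biases)
def conv_params(in_ch, out_ch, k=3):
    return (k * k * in_ch + 1) * out_ch

# Hypothetical channel widths for a 13-layer plain 3x3 conv stack
# (illustrative assumption only; not the published SimpleNet configuration)
widths = [3, 64, 128, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512]
total = sum(conv_params(i, o) for i, o in zip(widths, widths[1:]))
print(f"13-layer conv stack: {total / 1e6:.1f}M parameters")

# For contrast: the first fully-connected layer of VGG16 alone
# (7x7x512 inputs -> 4096 units) dwarfs the whole conv stack above.
vgg_fc1 = (7 * 7 * 512 + 1) * 4096
print(f"VGG16 fc1 layer alone: {vgg_fc1 / 1e6:.1f}M parameters")
```

Keeping the network fully convolutional with moderate widths is what keeps the total in the single-digit millions, consistent with the 2-25x reduction claimed in the abstract.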


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Classification | CIFAR-10 | SimpleNetv1 | Percentage correct | 95.51 | #120 |
| Image Classification | CIFAR-100 | SimpleNetv1 | Percentage correct | 78.37 | #132 |
| Image Classification | ImageNet | SimpleNetV1-small-075-correct-labels | Top 1 Accuracy | 75.66 | #871 |
| Image Classification | ImageNet | SimpleNetV1-small-075-correct-labels | Number of params | 3M | #366 |
| Image Classification | ImageNet | SimpleNetV1-9m-correct-labels | Top 1 Accuracy | 81.24 | #599 |
| Image Classification | ImageNet | SimpleNetV1-9m-correct-labels | Number of params | 9.5M | #472 |
| Image Classification | ImageNet | SimpleNetV1-small-075 | Top 1 Accuracy | 68.15 | #960 |
| Image Classification | ImageNet | SimpleNetV1-small-075 | Number of params | 3M | #366 |
| Image Classification | ImageNet | SimpleNetV1-small-05-correct-labels | Top 1 Accuracy | 69.11 | #955 |
| Image Classification | ImageNet | SimpleNetV1-small-05-correct-labels | Number of params | 1.5M | #353 |
| Image Classification | ImageNet | SimpleNetV1-9m | Top 1 Accuracy | 74.17 | #906 |
| Image Classification | ImageNet | SimpleNetV1-9m | Number of params | 9.5M | #472 |
| Image Classification | ImageNet | SimpleNetV1-5m | Top 1 Accuracy | 71.94 | #931 |
| Image Classification | ImageNet | SimpleNetV1-5m | Number of params | 5.7M | #428 |
| Image Classification | ImageNet | SimpleNetV1-small-05 | Top 1 Accuracy | 61.52 | #973 |
| Image Classification | ImageNet | SimpleNetV1-small-05 | Number of params | 1.5M | #353 |
| Image Classification | ImageNet | SimpleNetV1-5m-correct-labels | Top 1 Accuracy | 79.12 | #713 |
| Image Classification | ImageNet | SimpleNetV1-5m-correct-labels | Number of params | 5.7M | #428 |
| Image Classification | MNIST | SimpleNetv1 | Percentage error | 0.25 | #10 |
