SpinalNet: Deep Neural Network with Gradual Input

Over the past few years, deep neural networks (DNNs) have achieved remarkable success in a diverse range of real-world applications. However, DNNs take a large number of inputs and contain a large number of parameters, resulting in high computational demand. We study the human somatosensory system and propose SpinalNet to achieve higher accuracy with fewer computational resources. In a typical neural network (NN) architecture, the hidden layers receive inputs in the first layer and then transfer the intermediate outcomes to the next layer. In the proposed SpinalNet, the hidden layers are organized into three sectors: 1) input row, 2) intermediate row, and 3) output row. The intermediate row contains only a few neurons. Input segmentation enables each hidden layer to receive a part of the inputs together with the outputs of the previous layer, so the number of incoming weights in a hidden layer is significantly lower than in traditional DNNs. As all layers of the SpinalNet contribute directly to the output row, the vanishing gradient problem does not arise. We also integrate the SpinalNet fully-connected layer into several well-known DNN models and perform both traditional learning and transfer learning, observing significant error reductions with lower computational costs in most of the DNNs. We obtain state-of-the-art (SOTA) performance on the QMNIST, Kuzushiji-MNIST, EMNIST (Letters, Digits, and Balanced), STL-10, Bird-225, Fruits 360, and Caltech-101 datasets. The scripts of the proposed SpinalNet are available at: https://github.com/dipuk0506/SpinalNet
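The gradual-input idea described above can be sketched as a forward pass: the input is split into halves, each intermediate-row layer sees one half of the input concatenated with the previous layer's output, and every layer's output feeds the final output row. The following is a minimal NumPy sketch under those assumptions; the function names, dimensions, and two-way input split are illustrative, not taken from the authors' repository.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def spinal_fc_forward(x, spinal_weights, out_weight):
    """Forward pass of a SpinalNet-style fully-connected head (sketch).

    x: (batch, in_features) input, split into two halves.
    spinal_weights: one weight matrix per intermediate-row layer; layer 0
        sees only the first half of x, each later layer sees one half of x
        concatenated with the previous layer's output.
    out_weight: maps the concatenation of ALL layer outputs to the classes,
        so every layer contributes directly to the output row.
    """
    half = x.shape[1] // 2
    halves = [x[:, :half], x[:, half:]]
    outputs = []
    prev = np.zeros((x.shape[0], 0))  # layer 0 has no previous output
    for i, w in enumerate(spinal_weights):
        inp = np.concatenate([halves[i % 2], prev], axis=1)
        prev = relu(inp @ w)          # incoming width: half (+ layer width)
        outputs.append(prev)
    return np.concatenate(outputs, axis=1) @ out_weight

# Usage with illustrative sizes: 8 input features, 4 spinal layers of
# width 4, 3 classes. Each spinal layer takes at most 4 + 4 = 8 inputs
# instead of the full concatenated feature vector.
rng = np.random.default_rng(0)
in_features, width, classes, n_layers = 8, 4, 3, 4
half = in_features // 2
ws = [rng.standard_normal((half if i == 0 else half + width, width)) * 0.1
      for i in range(n_layers)]
w_out = rng.standard_normal((n_layers * width, classes)) * 0.1
logits = spinal_fc_forward(rng.standard_normal((2, in_features)), ws, w_out)
print(logits.shape)  # (2, 3): one logit vector per batch element
```

Note how `out_weight` spans the concatenation of all intermediate outputs: this is the property the abstract credits with avoiding vanishing gradients, since each layer has a direct path to the loss.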

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Fine-Grained Image Classification | Bird-225 | VGG-19bn | Accuracy | 98.67 | #3 |
| Fine-Grained Image Classification | Bird-225 | VGG-19bn (Spinal FC) | Accuracy | 99.02 | #2 |
| Fine-Grained Image Classification | Caltech-101 | Wide-ResNet-101 (Spinal FC) | Top-1 Error Rate | 2.68% | #1 |
| Fine-Grained Image Classification | Caltech-101 | Wide-ResNet-101 | Top-1 Error Rate | 2.89% | #2 |
| Fine-Grained Image Classification | Caltech-101 | VGG-19bn (Spinal FC) | Top-1 Error Rate | 6.84% | #5 |
| Image Classification | EMNIST-Balanced | VGG-5 | Accuracy | 91.04 | #2 |
| Image Classification | EMNIST-Balanced | VGG-5 (Spinal FC) | Accuracy | 91.05 | #1 |
| Image Classification | EMNIST-Letters | VGG-5 | Accuracy | 95.86 | #2 |
| Image Classification | EMNIST-Letters | VGG-5 (Spinal FC) | Accuracy | 95.88 | #1 |
| Image Classification | Flowers-102 | Wide-ResNet-101 (Spinal FC) | Accuracy | 99.30 | #9 |
| Fine-Grained Image Classification | Fruits-360 | VGG-19bn | Accuracy (%) | 99.90 | #1 |
| Image Classification | Kuzushiji-MNIST | VGG-5 (Spinal FC) | Accuracy | 99.15 | #1 |
| Image Classification | Kuzushiji-MNIST | VGG-5 (Spinal FC) | Error | 0.85 | #1 |
| Image Classification | MNIST | VGG-5 (Spinal FC) | Percentage error | 0.28 | #14 |
| Image Classification | MNIST | VGG-5 (Spinal FC) | Accuracy | 99.72 | #8 |
| Fine-Grained Image Classification | Oxford 102 Flowers | Wide-ResNet-101 (Spinal FC) | Accuracy | 99.30% | #2 |
| Image Classification | STL-10 | VGG-19bn | Percentage correct | 95.44 | #14 |
| Image Classification | STL-10 | Wide-ResNet-101 (Spinal FC) | Percentage correct | 98.66 | #1 |
