Ensemble learning in CNN augmented with fully connected subnetworks

19 Mar 2020 · Daiki Hirata, Norikazu Takahashi

Convolutional Neural Networks (CNNs) have shown remarkable performance in general object recognition tasks. In this paper, we propose a new model called EnsNet, which is composed of one base CNN and multiple Fully Connected SubNetworks (FCSNs). In this model, the set of feature maps generated by the last convolutional layer of the base CNN is divided along the channel dimension into disjoint subsets, and these subsets are assigned to the FCSNs. Each FCSN is trained independently of the others so that it can predict the class label from the subset of feature maps assigned to it. The output of the overall model is determined by a majority vote among the base CNN and the FCSNs. Experimental results on the MNIST, Fashion-MNIST and CIFAR-10 datasets show that the proposed approach further improves the performance of CNNs. In particular, an EnsNet achieves a state-of-the-art error rate of 0.16% on MNIST.
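The abstract describes the architecture concretely enough to sketch in code. The following is a minimal, hypothetical PyTorch sketch of the idea: a small base CNN, a channel-wise split of its last feature maps across several fully connected subnetworks, and a majority vote over all predictions. The layer sizes, number of subnetworks, and the `EnsNetSketch` / `majority_vote` names are illustrative assumptions, not the paper's exact configuration, and the paper's independent per-FCSN training procedure is not reproduced here.

```python
import torch
import torch.nn as nn

class EnsNetSketch(nn.Module):
    """Sketch of the EnsNet idea: a base CNN whose last feature maps are
    split along channels and fed to fully connected subnetworks (FCSNs).
    Layer sizes are illustrative, not the paper's configuration."""

    def __init__(self, num_classes=10, num_subnets=5, channels=40):
        super().__init__()
        assert channels % num_subnets == 0
        self.num_subnets = num_subnets
        self.split = channels // num_subnets
        # Hypothetical base CNN for 28x28 grayscale input (e.g. MNIST).
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, channels, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Classification head of the base CNN itself.
        self.base_head = nn.Sequential(
            nn.Flatten(), nn.Linear(channels * 7 * 7, num_classes)
        )
        # One fully connected subnetwork per disjoint channel subset.
        self.subnets = nn.ModuleList([
            nn.Sequential(nn.Flatten(),
                          nn.Linear(self.split * 7 * 7, 128),
                          nn.ReLU(),
                          nn.Linear(128, num_classes))
            for _ in range(num_subnets)
        ])

    def forward(self, x):
        fmap = self.features(x)                    # (B, channels, 7, 7)
        logits = [self.base_head(fmap)]            # base CNN prediction
        for i, subnet in enumerate(self.subnets):  # disjoint channel subsets
            chunk = fmap[:, i * self.split:(i + 1) * self.split]
            logits.append(subnet(chunk))
        return logits

def majority_vote(logits_list):
    """Combine the base CNN and FCSN predictions by majority vote
    (ties fall back to the smallest class index via torch.mode)."""
    votes = torch.stack([l.argmax(dim=1) for l in logits_list])  # (models, B)
    return votes.mode(dim=0).values                              # (B,)
```

For a batch `x` of 28x28 images, `majority_vote(EnsNetSketch()(x))` returns one predicted class label per image.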


Results from the Paper


Task: Image Classification
Dataset: MNIST
Model: EnsNet (Ensemble learning in CNN augmented with fully connected subnetworks)

Metric            Value   Global Rank
Percentage error  0.16    # 2
Accuracy          99.84   # 2

Methods


No methods listed for this paper.