Competitive Multi-scale Convolution

18 Nov 2015 · Zhibin Liao, Gustavo Carneiro

In this paper, we introduce a new deep convolutional neural network (ConvNet) module that promotes competition among a set of multi-scale convolutional filters. The module is inspired by the inception module, but replaces its original collaborative pooling stage (a concatenation of the multi-scale filter outputs) with a competitive pooling stage implemented by a maxout activation unit. This extension has two objectives: 1) selecting the maximum response among the multi-scale filters prevents filter co-adaptation and allows the formation of multiple sub-networks within the same model, which has been shown to facilitate the training of complex learning problems; and 2) the maxout unit reduces the dimensionality of the outputs from the multi-scale filters. We show that using the proposed module in typical deep ConvNets produces classification results that are either better than or comparable to the state of the art on the following benchmark datasets: MNIST, CIFAR-10, CIFAR-100 and SVHN.
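
To make the idea concrete, below is a minimal PyTorch sketch of such a competitive multi-scale block: several parallel convolutions at different kernel sizes produce same-sized feature maps, and a maxout unit keeps only the maximum response across scales instead of concatenating them. The kernel sizes, channel counts, and class/argument names here are illustrative assumptions, not the exact configuration from the paper.

```python
# Sketch of a competitive multi-scale convolution block (assumed configuration).
import torch
import torch.nn as nn


class CompetitiveMultiScaleConv(nn.Module):
    """Parallel convolutions at several scales, merged by a maxout unit.

    Each branch produces `out_channels` feature maps at the same spatial
    resolution ("same" padding). Instead of concatenating the branch outputs
    (as in the inception module), the block keeps, per position and channel,
    only the maximum response across branches, so the output dimensionality
    stays at `out_channels` rather than `len(kernel_sizes) * out_channels`.
    """

    def __init__(self, in_channels, out_channels, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, out_channels, k, padding=k // 2)
            for k in kernel_sizes
        )

    def forward(self, x):
        # Stack branch outputs along a new leading dimension and take the
        # elementwise maximum: the competitive (maxout) pooling across scales.
        responses = torch.stack([branch(x) for branch in self.branches], dim=0)
        return responses.max(dim=0).values


if __name__ == "__main__":
    block = CompetitiveMultiScaleConv(in_channels=3, out_channels=32)
    y = block(torch.randn(1, 3, 32, 32))  # e.g. a CIFAR-sized input
    print(y.shape)  # torch.Size([1, 32, 32, 32])
```

Because only one branch "wins" at each spatial position and channel, gradients flow to the winning filter only, which is what discourages co-adaptation among the multi-scale filters and effectively trains multiple sub-networks within the same model.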

Results from the Paper


Task                  Dataset    Model  Metric              Value  Global Rank
Image Classification  CIFAR-10   CMsC   Percentage correct  93.1   #160
Image Classification  CIFAR-100  CMsC   Percentage correct  72.4   #158
Image Classification  MNIST      CMsC   Percentage error    0.3    #17
Image Classification  SVHN       CMsC   Percentage error    1.8    #22

Methods