ChoiceNet: CNN learning through choice of multiple feature map representations

We introduce a new architecture called ChoiceNet in which each layer of the network is densely connected to the others through skip connections and channelwise concatenations. This connectivity alleviates the vanishing-gradient problem, reduces the number of parameters without sacrificing performance, and encourages feature reuse. We evaluate the proposed architecture on four benchmark datasets for object recognition (ImageNet, CIFAR-10, CIFAR-100, SVHN) and on a semantic segmentation dataset (CamVid).
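The connectivity pattern described above combines two familiar mechanisms: an additive skip connection (residual-style) and a channelwise concatenation of a layer's input with its output (dense-style), so later layers can reuse earlier feature maps. The sketch below illustrates that pattern in NumPy; the layer `simple_layer`, the block name `choice_block`, and all shapes are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def simple_layer(x, w):
    """Stand-in for a conv layer: per-channel linear map followed by ReLU.

    x has shape (channels, H, W); w maps input channels to output channels.
    """
    return np.maximum(0.0, np.einsum('chw,oc->ohw', x, w))

def choice_block(x, w):
    """Hypothetical block combining a skip connection with channelwise concatenation."""
    h = simple_layer(x, w)                 # new feature maps, shape (out_c, H, W)
    h = h + x[: h.shape[0]]                # additive skip connection on matching channels
    return np.concatenate([x, h], axis=0)  # channelwise concatenation: input is reused downstream

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))   # (channels, H, W)
w = rng.standard_normal((8, 8))      # 8 input channels -> 8 output channels
y = choice_block(x, w)
print(y.shape)  # (16, 4, 4): the input's 8 channels survive alongside the 8 new ones
```

Because every block's output carries its input forward, gradients have a short path back to early layers, which is the mechanism the abstract credits for mitigating vanishing gradients.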
