EraseReLU: A Simple Way to Ease the Training of Deep Convolution Neural Networks

22 Sep 2017 · Xuanyi Dong, Guoliang Kang, Kun Zhan, Yi Yang

In most state-of-the-art architectures, the Rectified Linear Unit (ReLU) has become a standard component accompanying each layer. Although ReLU eases network training to an extent, its blocking of negative values may suppress the propagation of useful information and make very deep Convolutional Neural Networks (CNNs) difficult to optimize. Moreover, a stack of layers with nonlinear activations struggles to approximate the intrinsic linear transformations between feature representations. In this paper, we investigate the effect of erasing the ReLUs of certain layers and apply it, following deterministic rules, to various representative architectures. This eases optimization and improves the generalization performance of very deep CNN models. We find two key factors essential to the performance improvement: 1) the location of the erased ReLU inside the basic module; 2) the proportion of basic modules whose ReLU is erased. We show that erasing the last ReLU layer of all basic modules in a network usually yields improved performance. In experiments, our approach improves the performance of various representative architectures, and we report improved results on SVHN, CIFAR-10/100, and ImageNet. Moreover, we achieve a competitive single-model error rate of 16.53% on CIFAR-100 compared to the state of the art.
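
The sketch below illustrates the core idea described in the abstract: removing the last ReLU of a basic module (here, a ResNet-style basic block) while keeping the intermediate ReLU. This is a minimal PyTorch-style sketch, not the authors' implementation; the class name `BasicBlock` and the `erase_relu` flag are illustrative assumptions.

```python
import torch
import torch.nn as nn


class BasicBlock(nn.Module):
    """ResNet-style basic block with an optional 'erased' final ReLU."""

    def __init__(self, channels: int, erase_relu: bool = True):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        # EraseReLU idea: drop the nonlinearity that would normally follow the block,
        # so negative values produced by the block can propagate to the next layer.
        self.last_act = nn.Identity() if erase_relu else nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))   # intermediate ReLU is kept
        out = self.bn2(self.conv2(out))
        out = out + x                              # residual connection
        return self.last_act(out)                  # last ReLU erased when erase_relu=True


# Usage: a block with the last ReLU erased versus a standard block.
x = torch.randn(2, 16, 32, 32)
erased = BasicBlock(16, erase_relu=True)(x)
standard = BasicBlock(16, erase_relu=False)(x)
```

Erasing the final activation in every basic block (rather than in an arbitrary subset, or at an interior position) corresponds to the rule the paper reports as most effective.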


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Image Classification | SVHN | EraseReLU | Percentage error | 1.54 | #12 |
