Inhibition-augmented ConvNets

1 Jan 2021 · Nicola Strisciuglio, George Azzopardi, Nicolai Petkov

Convolutional Networks (ConvNets) lack robustness to common corruptions and perturbations of the input that are unseen during training. We address this problem by adding a form of response inhibition to the early layers of existing ConvNets and assess the representation power and generalization of the resulting models on corrupted inputs. The inhibition mechanism is a non-linear computation inspired by the push-pull inhibition exhibited by some neurons in the visual system of the brain. In practice, each convolutional filter (push) in the early layers of a conventional ConvNet is coupled with another filter (pull), which responds to the preferred pattern of the corresponding push filter but of opposite contrast. The rectified responses of the push and pull filter pairs are then combined by a linear function. The resulting representation suppresses responses to noisy patterns (e.g. texture alterations, Gaussian and shot noise, distortions, among others) and accentuates responses to preferred patterns. We deploy the layer in existing architectures, (Wide-)ResNet and DenseNet, and propose new residual and dense push-pull layers. We demonstrate that ConvNets that embed this inhibition in their initial layers learn representations that are robust to several types of input corruption. We validate the approach on the ImageNet and CIFAR data sets and on their corrupted and perturbed versions, ImageNet-C/P and CIFAR-C/P, and find that push-pull inhibition enhances the overall robustness and generalization of ConvNets to corrupted and perturbed input data. Besides the improvement in generalization, it is notable that ConvNets with push-pull inhibition learn sparser representations than conventional ones without inhibition. The code and trained models will be made available.
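The push-pull computation described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, the `alpha` inhibition weight, and the way the pull kernel is derived (a negated push kernel with enlarged support via a box blur, whereas the paper uses an upsampled negated copy of the push kernel) are assumptions made for this sketch.

```python
import numpy as np

def conv2d_same(image, kernel):
    # Naive 2D cross-correlation with zero padding ('same' output size).
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, kh - 1 - ph), (pw, kw - 1 - pw)))
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    # Rectification of the filter responses.
    return np.maximum(x, 0.0)

def make_pull_kernel(push_kernel, blur=3):
    # Assumption: the pull kernel is modeled here as the negated push kernel
    # with enlarged support (full convolution with a small box filter), so it
    # responds to the push filter's preferred pattern at opposite contrast
    # over a wider region. The paper instead uses an upsampled negated copy.
    box = np.full((blur, blur), 1.0 / blur**2)
    kh, kw = push_kernel.shape
    pull = np.zeros((kh + blur - 1, kw + blur - 1))
    for i in range(kh):
        for j in range(kw):
            pull[i:i + blur, j:j + blur] += push_kernel[i, j] * box
    return -pull

def push_pull_response(image, push_kernel, alpha=1.0):
    # Rectified push and pull responses, combined by a linear function:
    # the pull response inhibits the push response.
    pull_kernel = make_pull_kernel(push_kernel)
    push = relu(conv2d_same(image, push_kernel))
    pull = relu(conv2d_same(image, pull_kernel))
    return relu(push - alpha * pull)
```

On a clean preferred pattern (e.g. a step edge for an edge-detecting push filter) the pull filter stays silent and the push response passes through unchanged, while on noisy input the pull response is non-zero and suppresses part of the push activation, which is the intended inhibition effect.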

