Self-Attention Capsule Networks for Object Classification

29 Apr 2019 · Assaf Hoogi, Brian Wilcox, Yachee Gupta, Daniel L. Rubin

We propose a novel architecture for object classification, called Self-Attention Capsule Networks (SACN). SACN is the first model that incorporates the Self-Attention mechanism as an integral layer within the Capsule Network (CapsNet). While the Self-Attention mechanism captures long-range dependencies and selects the most dominant image regions to focus on, the CapsNet analyzes the relevant features and their spatial correlations inside those regions only. Features are extracted in the convolutional layer; the Self-Attention layer then learns to suppress irrelevant regions based on feature analysis and to highlight salient features that are useful for the task at hand. The attention map is fed into the primary layer of the CapsNet, which is followed by a classification layer. The proposed SACN model was designed to address two main limitations of the baseline CapsNet: the analysis of complex data and a significant computational load. In this work, we use a shallow CapsNet architecture and compensate for the absence of a deeper network with the Self-Attention module, which significantly improves the results. The proposed Self-Attention CapsNet architecture was extensively evaluated on six datasets: three medical sets, in addition to the natural MNIST, SVHN and CIFAR10 sets. The model classified images and image patches with diverse and complex backgrounds better than the baseline CapsNet. As a result, the proposed Self-Attention CapsNet significantly improved classification performance within and across the different datasets, outperforming the baseline CapsNet, ResNet-18 and DenseNet-40 not only in classification accuracy but also in robustness.
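To make the described pipeline concrete, below is a minimal PyTorch sketch of the data flow: a convolutional feature extractor, a self-attention layer over the feature maps, and a primary capsule layer with the standard squash nonlinearity. This is an illustration under stated assumptions, not the authors' exact implementation: the layer sizes (9x9 kernels, 256 channels, 8-dimensional capsules) are borrowed from the baseline CapsNet of Sabour et al., and `SelfAttention2d` follows the common SAGAN-style formulation of self-attention (1x1 query/key/value projections with a learned residual weight).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over 2D feature maps (an assumed formulation)."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned weight of the attended residual

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, h*w, c//8)
        k = self.key(x).flatten(2)                    # (b, c//8, h*w)
        attn = F.softmax(q @ k, dim=-1)               # (b, h*w, h*w) attention map
        v = self.value(x).flatten(2)                  # (b, c, h*w)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # attended features + skip connection

class SACNSketch(nn.Module):
    """Conv features -> self-attention -> primary capsules (routing omitted)."""
    def __init__(self, in_ch=1, conv_ch=256, caps_dim=8, n_caps_maps=32):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, conv_ch, 9)                         # feature extraction
        self.attn = SelfAttention2d(conv_ch)                             # attention layer
        self.primary = nn.Conv2d(conv_ch, n_caps_maps * caps_dim, 9, 2)  # primary capsule conv
        self.caps_dim = caps_dim

    @staticmethod
    def squash(s, eps=1e-8):
        # squash(s) = ||s||^2 / (1 + ||s||^2) * s / ||s||
        n = s.norm(dim=-1, keepdim=True)
        return (n ** 2 / (1 + n ** 2)) * (s / (n + eps))

    def forward(self, x):
        x = F.relu(self.conv(x))
        x = self.attn(x)                           # feed attended map into primary capsules
        u = self.primary(x)
        u = u.view(u.size(0), -1, self.caps_dim)   # (b, n_capsules, caps_dim)
        return self.squash(u)
```

For a 28x28 single-channel input, this sketch yields 1152 eight-dimensional primary capsule vectors; a routing-by-agreement layer mapping these to class capsules would follow, and is omitted here for brevity.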
