Representation Quality Of Neural Networks Links To Adversarial Attacks and Defences

15 Jun 2019 · Shashank Kotyan, Danilo Vasconcellos Vargas, Moe Matsuki

Neural networks have been shown to be vulnerable to a variety of adversarial algorithms. A crucial step towards understanding this lack of robustness is to assess how well a network's representation encodes the existing features. Here, we propose a method to evaluate the representation quality of neural networks using a novel test based on Zero-Shot Learning, entitled Raw Zero-Shot. The principal idea is that, if an algorithm learns rich features, those features should be able to interpret "unknown" classes as aggregates of previously learned features, because unknown classes usually share several regular features with recognised classes, provided the learned features are general enough. We further introduce two metrics to assess how well the learned features interpret unknown classes: one based on an inter-cluster validation technique (the Davies-Bouldin Index), and the other based on the distance to an approximated ground truth. Experiments show that adversarial defences improve the representation of the classifiers, suggesting that improving robustness also requires improving representation quality. Experiments also reveal a strong association (high Pearson correlation and low p-value) between the metrics and adversarial attacks. Interestingly, the results indicate that dynamic routing networks such as CapsNet have better representations, while current deeper neural networks trade representation quality for accuracy. Code available at http://bit.ly/RepresentationMetrics.
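
As a rough illustration of the clustering-based metric described in the abstract, the sketch below clusters the feature vectors of a held-out ("unknown") class and scores the clustering with the Davies-Bouldin Index, then correlates the metric with a robustness measure as the abstract reports. The function name raw_zero_shot_dbi, the use of k-means, the default cluster count, and the placeholder data are illustrative assumptions, not the authors' exact implementation.

# Illustrative sketch only: clusters features of a held-out class and scores
# cluster separation with the Davies-Bouldin Index (lower = better separated).
# raw_zero_shot_dbi, the k-means step, and the default cluster count are
# assumptions for illustration, not the paper's exact procedure.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score


def raw_zero_shot_dbi(unknown_class_features: np.ndarray, n_clusters: int = 9) -> float:
    """Score how crisply a classifier's features organise samples of a class
    it was never trained on. unknown_class_features is an (n_samples,
    n_features) array, e.g. logits or penultimate-layer activations for the
    held-out class."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
        unknown_class_features
    )
    return davies_bouldin_score(unknown_class_features, labels)


# Usage example: correlate the metric with per-model adversarial robustness,
# mirroring the Pearson correlation reported in the abstract. The arrays here
# are random placeholders standing in for real per-model measurements.
if __name__ == "__main__":
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    dbi_scores = rng.random(5)        # placeholder metric values per model
    attack_success = rng.random(5)    # placeholder attack success rates
    r, p = pearsonr(dbi_scores, attack_success)
    print(f"Pearson r = {r:.3f}, p = {p:.3f}")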


