An Orthogonal Classifier for Improving the Adversarial Robustness of Neural Networks

19 May 2021  ·  Cong Xu, Xiang Li, Min Yang ·

Neural networks are susceptible to artificially designed adversarial perturbations. Recent efforts have shown that imposing certain modifications on the classification layer can improve the robustness of neural networks. In this paper, we explicitly construct a dense orthogonal weight matrix whose entries all have the same magnitude, leading to a novel robust classifier. The proposed classifier avoids the undesired structural-redundancy issue of previous work. Applying this classifier in standard training on clean data is sufficient to ensure high accuracy and good robustness of the model. Moreover, when extra adversarial samples are used, even better robustness can be obtained with the help of a special worst-case loss. Experimental results show that our method is efficient and competitive with many state-of-the-art defensive approaches. Our code is available at https://github.com/MTandHJ/roboc.
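A minimal sketch of one standard way to build a dense orthogonal matrix whose entries all share the same magnitude: the Sylvester construction of a normalized Hadamard matrix. This is offered only as an illustration of the property described in the abstract; the paper's actual construction may differ, so consult the linked repository for the authors' implementation.

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester construction of an n x n Hadamard matrix.

    Every entry is +1 or -1, and the rows are mutually orthogonal.
    Requires n to be a power of two.
    """
    assert n > 0 and (n & (n - 1)) == 0, "n must be a power of two"
    H = np.array([[1.0]])
    while H.shape[0] < n:
        # Doubling step: [[H, H], [H, -H]] preserves row orthogonality.
        H = np.block([[H, H], [H, -H]])
    return H

n = 8
# Dividing by sqrt(n) makes the rows orthonormal (W @ W.T == I),
# while every entry keeps the same magnitude 1 / sqrt(n): a dense
# orthogonal matrix with equal-magnitude entries.
W = hadamard(n) / np.sqrt(n)
assert np.allclose(W @ W.T, np.eye(n))
```

Fixing such a matrix (or its first k rows, for k classes) as the weight of the classification layer gives every class direction the same norm and pairwise orthogonality, which is the structural property the abstract attributes to the proposed robust classifier.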


Results from the Paper


Task                Dataset   Model      Metric Name         Metric Value   Global Rank
Adversarial Attack  CIFAR-10  Xu et al.  Attack: PGD20       78.680         #1
Adversarial Attack  CIFAR-10  Xu et al.  Attack: DeepFool    51.310         #1
Adversarial Attack  CIFAR-10  Xu et al.  Attack: AutoAttack  44.150         #2
