no code implementations • 29 Sep 2021 • Davide Coppola, Hwee Kuan Lee, Cuntai Guan
Experiments on the CIFAR10 dataset showed that, using only $10\%$ of the full training set, the proposed method was able to adequately defend the model against the AutoPGD attack, while maintaining a classification accuracy on clean images that outperformed the adversarially trained model by $7\%$.
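The abstract describes an evaluation protocol: measure accuracy on clean CIFAR10 test images and on adversarial examples crafted against the defended model. Below is a minimal sketch of such an evaluation; it uses a standard L-infinity PGD attack as a simplified stand-in for AutoPGD, and the model `net` is a generic placeholder rather than the paper's defended network, so all names and hyperparameters here are illustrative assumptions.

```python
# Sketch: clean vs. adversarial accuracy on CIFAR10.
# PGD is used as a simplified stand-in for AutoPGD; `net` is a placeholder model.
import torch
import torch.nn.functional as F
import torchvision
import torchvision.transforms as T

device = "cuda" if torch.cuda.is_available() else "cpu"

testset = torchvision.datasets.CIFAR10(root="./data", train=False,
                                       download=True, transform=T.ToTensor())
loader = torch.utils.data.DataLoader(testset, batch_size=128, shuffle=False)

# Placeholder model; the paper's defended network would be loaded here instead.
net = torchvision.models.resnet18(num_classes=10).to(device).eval()

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-inf PGD: ascend the loss and project back into the eps-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv

clean_correct = adv_correct = total = 0
for x, y in loader:
    x, y = x.to(device), y.to(device)
    with torch.no_grad():
        clean_correct += (net(x).argmax(1) == y).sum().item()
    x_adv = pgd_attack(net, x, y)
    with torch.no_grad():
        adv_correct += (net(x_adv).argmax(1) == y).sum().item()
    total += y.size(0)

print(f"clean accuracy: {clean_correct/total:.3f}, "
      f"adversarial accuracy: {adv_correct/total:.3f}")
```

Comparing the two accuracies in this way is how claims such as the reported $7\%$ clean-accuracy gap over adversarial training would typically be quantified.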