Black-box Adversarial ML Attack on Modulation Classification

1 Aug 2019  ·  Muhammad Usama, Junaid Qadir, Ala Al-Fuqaha

Recently, many deep neural network (DNN)-based modulation classification schemes have been proposed in the literature. We evaluate the robustness of two well-known modulation classifiers (based on convolutional neural networks and long short-term memory networks) against adversarial machine learning attacks in a black-box setting. We use the Carlini & Wagner (C-W) attack to perform the adversarial attack. To the best of our knowledge, the robustness of these modulation classifiers has not previously been evaluated against the C-W attack. Our results clearly indicate that state-of-the-art deep-learning-based modulation classifiers are not robust against adversarial attacks.
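The C-W attack frames adversarial example generation as an optimization problem: minimize the perturbation norm plus a hinge term that pushes the classifier's logits toward a chosen wrong class. The sketch below illustrates that objective on a toy linear "classifier" with numerical gradient descent; the model, data, and hyperparameters are illustrative assumptions, not the paper's actual CNN/LSTM modulation classifiers (which C-W typically attacks with Adam on exact gradients).

```python
import numpy as np

# Illustrative sketch of the Carlini & Wagner (C-W) L2 objective.
# The toy linear model below is an assumption for demonstration only;
# the paper attacks CNN- and LSTM-based modulation classifiers.

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # toy logits: Z(x) = W @ x (3 classes, 4 features)
x0 = rng.normal(size=4)       # clean input (stand-in for a radio signal sample)
target = (int(np.argmax(W @ x0)) + 1) % 3  # targeted wrong class

C, KAPPA, LR = 10.0, 0.5, 0.05  # trade-off constant, confidence margin, step size

def cw_objective(delta):
    """||delta||^2 + C * max(max_{i != t} Z_i - Z_t, -KAPPA)."""
    z = W @ (x0 + delta)
    hinge = max(np.max(np.delete(z, target)) - z[target], -KAPPA)
    return delta @ delta + C * hinge

# Gradient descent via central finite differences, keeping the smallest
# perturbation that actually fools the classifier (as C-W does).
delta, best = np.zeros(4), None
for _ in range(500):
    grad = np.array([(cw_objective(delta + e) - cw_objective(delta - e)) / 2e-5
                     for e in np.eye(4) * 1e-5])
    delta -= LR * grad
    if int(np.argmax(W @ (x0 + delta))) == target:
        if best is None or delta @ delta < best @ best:
            best = delta.copy()

if best is not None:
    print("attack succeeded; L2 perturbation:", float(np.linalg.norm(best)))
```

Note that C-W itself requires gradients of the victim model (a white-box assumption); in a black-box setting such attacks are commonly mounted through a substitute model whose adversarial examples transfer to the target.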
