no code implementations • 27 Sep 2023 • Mahmoud Nazzal, Nura Aljaafari, Ahmed Sawalmeh, Abdallah Khreishah, Muhammad Anan, Abdulelah Algosaibi, Mohammed Alnaeem, Adel Aldalbahi, Abdulaziz Alhumam, Conrado P. Vizcarra, Shadan Alhamed
In this paper, we propose GABAttack, a novel genetic algorithm-based backdoor attack against federated learning for network traffic classification.
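The genetic-algorithm core of such an attack can be sketched in miniature. The fitness function below is only a stand-in for the real objective (in the actual attack it would measure backdoor success against the federated model); the trigger length, population size, and all other parameters are illustrative assumptions, not values from the paper:

```python
import random

random.seed(0)

TRIGGER_LEN = 16          # hypothetical trigger length (feature positions to perturb)
POP_SIZE, GENERATIONS = 20, 30

def fitness(trigger):
    # Stand-in objective: in the real attack this would query the backdoor
    # success rate of the poisoned FL model; here we simply reward dense triggers.
    return sum(trigger) / TRIGGER_LEN

def crossover(a, b):
    # Single-point crossover of two parent triggers.
    cut = random.randrange(1, TRIGGER_LEN)
    return a[:cut] + b[cut:]

def mutate(t, rate=0.05):
    # Flip each bit of the trigger with a small probability.
    return [1 - bit if random.random() < rate else bit for bit in t]

population = [[random.randint(0, 1) for _ in range(TRIGGER_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP_SIZE // 2]            # selection with elitism
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP_SIZE - len(elite))]
    population = elite + children

best = max(population, key=fitness)
```

Because the elite half is carried over unmodified, the best fitness never decreases across generations; the GA only ever needs black-box fitness queries, which is what makes this family of attacks practical against a federated model the attacker cannot inspect.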
1 code implementation • 22 Aug 2023 • Mahmoud Nazzal, Issa Khalil, Abdallah Khreishah, NhatHai Phan, Yao Ma
In this work, we call the attack that manipulates several nodes in the DMG concurrently a multi-instance evasion attack.
1 code implementation • 28 Feb 2023 • Mahmoud Nazzal, Abdallah Khreishah, Joyoung Lee, Shaahin Angizi, Ala Al-Fuqaha, Mohsen Guizani
This approach minimizes inter-cloudlet communication, thereby alleviating the communication overhead of the decentralized approach while promoting scalability through cloudlet-level decentralization.
1 code implementation • 10 Nov 2022 • Khang Tran, Phung Lai, NhatHai Phan, Issa Khalil, Yao Ma, Abdallah Khreishah, My Thai, Xintao Wu
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs), given their ability to learn joint representation from features and edges among nodes in graph data.
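One intuition behind such PIAs can be sketched: because message passing pulls connected nodes' representations together, an attacker who observes node embeddings can threshold their similarity to guess private edges. The embeddings and threshold below are illustrative assumptions, not values from the paper:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def infer_edge(emb_u, emb_v, threshold=0.9):
    # GNN message passing makes neighboring nodes' embeddings similar,
    # so high similarity between two embeddings leaks the private edge.
    return cosine(emb_u, emb_v) > threshold

# Hypothetical embeddings: u and v were neighbors in the training graph, w was not.
emb_u, emb_v, emb_w = [1.0, 0.9, 0.1], [0.9, 1.0, 0.2], [0.1, 0.2, 1.0]
```

Here `infer_edge(emb_u, emb_v)` fires while `infer_edge(emb_u, emb_w)` does not, which is exactly the leakage a privacy-preserving GNN must suppress.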
no code implementations • 5 Sep 2022 • Guanxiong Liu, Abdallah Khreishah, Fatima Sharadgah, Issa Khalil
Through mathematical analysis, we show that if the attacker is perfect in injecting the backdoor, the Trojan-infected model will be trained to learn the appropriate prediction confidence bound, which is used to distinguish Trojan and benign inputs under arbitrary perturbations.
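The detection rule implied by such a confidence bound can be sketched as follows. The bound value and the logits are illustrative assumptions; in the paper's setting the bound would be learned during Trojan-aware training:

```python
import math

def softmax(logits):
    m = max(logits)                      # shift for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

CONF_BOUND = 0.95   # hypothetical learned confidence bound

def is_trojaned(logits, bound=CONF_BOUND):
    # The infected model is trained so that only trigger-carrying inputs can
    # push the top-class confidence above the bound, even under arbitrary
    # perturbations of benign inputs.
    return max(softmax(logits)) > bound

# Hypothetical logits: a benign input yields moderate confidence,
# a trigger-carrying input yields near-saturated confidence.
benign_logits = [2.0, 1.0, 0.5]
trojan_logits = [9.0, 1.0, 0.5]
```

Under this rule the benign input stays below the bound while the trigger-carrying one exceeds it, so a single threshold separates the two populations.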
no code implementations • 18 Jan 2022 • Phung Lai, NhatHai Phan, Issa Khalil, Abdallah Khreishah, Xintao Wu
This paper explores previously unknown backdoor risks in HyperNet-based personalized federated learning (HyperNetFL) through poisoning attacks.
no code implementations • 26 Oct 2021 • Izzat Alsmadi, Kashif Ahmad, Mahmoud Nazzal, Firoj Alam, Ala Al-Fuqaha, Abdallah Khreishah, Abdulelah Algosaibi
These vulnerabilities allow adversaries to launch a diversified set of adversarial attacks on these algorithms in different applications of social media text processing.
no code implementations • 7 Sep 2021 • Guanxiong Liu, Hang Shi, Abbas Kiani, Abdallah Khreishah, Jo Young Lee, Nirwan Ansari, Chengjun Liu, Mustafa Yousef
In this paper, we focus on two common traffic monitoring tasks, congestion detection and speed detection, and propose a two-tier edge-computing-based model that takes into account both the limited computing capability of cloudlets and the unstable network condition to the TMC.
no code implementations • 3 Sep 2021 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah, NhatHai Phan
In this work, we show how to jointly exploit adversarial perturbation and model poisoning vulnerabilities to practically launch a new stealthy attack, dubbed AdvTrojan.
no code implementations • 1 Jan 2021 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah, Hai Phan
In this work, we naturally unify adversarial examples and Trojan backdoors into a new stealthy attack that is activated only when 1) adversarial perturbation is injected into the input examples and 2) a Trojan backdoor is simultaneously used to poison the training process.
no code implementations • 11 Jul 2020 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah, Abdulelah Algosaibi, Adel Aldalbahi, Mohammed Alaneem, Abdulaziz Alhumam, Mohammed Anan
Through an extensive set of experiments on different datasets, we show that (1) adversarial examples generated by ManiGen can mislead standalone classifiers as successfully as the state-of-the-art white-box generator, Carlini, and (2) adversarial examples generated by ManiGen can more effectively attack classifiers equipped with state-of-the-art defenses.
no code implementations • 22 Feb 2020 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah
Single-step adversarial training methods have been proposed as computationally viable solutions; however, they still fail to defend against iterative adversarial examples.
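The gap between single-step and iterative attacks can be illustrated on a toy loss surface: one signed-gradient jump (FGSM-style) can land short of the loss that many small projected steps (PGD-style) reach within the same perturbation budget. The loss function, budget, and step sizes below are illustrative assumptions, not the paper's setup:

```python
import math

def loss(x):
    # Toy surrogate for a classifier's loss surface around a clean input.
    return math.sin(5 * x) + 0.5 * x

def grad(x, h=1e-5):
    # Central-difference approximation of the gradient.
    return (loss(x + h) - loss(x - h)) / (2 * h)

def sign(v):
    return (v > 0) - (v < 0)

EPS = 0.5          # perturbation budget

def fgsm(x0):
    # Single-step attack: one signed-gradient jump of size EPS.
    return x0 + EPS * sign(grad(x0))

def pgd(x0, steps=20, alpha=0.05):
    # Iterative attack: many small signed-gradient steps, each projected
    # back into the EPS-ball around the clean input.
    x = x0
    for _ in range(steps):
        x = x + alpha * sign(grad(x))
        x = min(max(x, x0 - EPS), x0 + EPS)
    return x

x0 = 0.0
```

On this surface the single FGSM jump overshoots the local loss peak, while PGD's small steps track it, so `loss(pgd(x0)) > loss(fgsm(x0))`; a model hardened only against the single-step attack therefore remains vulnerable to the iterative one.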
no code implementations • 27 Jun 2019 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah
Due to their surprisingly good power at representing complex distributions, neural network (NN) classifiers are widely used in many tasks, including natural language processing, computer vision, and cyber security.
1 code implementation • 17 Apr 2019 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah
Neural Network classifiers have been used successfully in a wide range of applications.
no code implementations • 6 Mar 2019 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah
Machine learning models, especially neural network (NN) classifiers, are widely used in many applications, including natural language processing, computer vision, and cybersecurity.
no code implementations • 7 Mar 2017 • Xiansheng Guo, Sihua Shao, Nirwan Ansari, Abdallah Khreishah
A multiple-classifier fusion localization technique using received signal strengths (RSSs) of visible light is proposed, in which the system transmits different intensity-modulated sinusoidal signals via LEDs, and the signals are received by a photodiode (PD) placed at various grid points.
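The fusion idea can be sketched with toy fingerprints: several simple classifiers each map an RSS vector to a grid point, and a vote combines their decisions. The grid, fingerprints, and component rules below are illustrative assumptions, not the paper's actual classifiers:

```python
from collections import Counter

# Hypothetical RSS fingerprints: one LED-RSS vector per grid point.
GRID = {"A": [0.9, 0.1, 0.2], "B": [0.2, 0.8, 0.3], "C": [0.1, 0.3, 0.9]}

def nearest_l1(rss):
    # Nearest fingerprint under the L1 (Manhattan) distance.
    return min(GRID, key=lambda g: sum(abs(a - b) for a, b in zip(GRID[g], rss)))

def nearest_l2(rss):
    # Nearest fingerprint under the squared Euclidean distance.
    return min(GRID, key=lambda g: sum((a - b) ** 2 for a, b in zip(GRID[g], rss)))

def max_channel(rss):
    # Crude rule: match on which LED is strongest in the measured vector.
    idx = rss.index(max(rss))
    return max(GRID, key=lambda g: GRID[g][idx])

def fuse(rss, classifiers=(nearest_l1, nearest_l2, max_channel)):
    # Majority vote over the component classifiers' grid-point decisions.
    votes = Counter(clf(rss) for clf in classifiers)
    return votes.most_common(1)[0][0]
```

For a measurement close to grid point A's fingerprint, e.g. `fuse([0.85, 0.15, 0.25])`, all three rules agree on "A"; the value of fusion shows when the component classifiers disagree and the vote smooths out individual errors.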