Search Results for author: Abdallah Khreishah

Found 16 papers, 4 papers with code

Multi-Instance Adversarial Attack on GNN-Based Malicious Domain Detection

1 code implementation22 Aug 2023 Mahmoud Nazzal, Issa Khalil, Abdallah Khreishah, NhatHai Phan, Yao Ma

In this work, we call the attack that manipulates several nodes in the DMG concurrently a multi-instance evasion attack.

Adversarial Attack
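The entry above defines a multi-instance evasion attack as one that manipulates several nodes of the domain-maliciousness graph (DMG) at once. The toy sketch below only illustrates that "several nodes jointly" aspect: a black-box random search perturbs the features of a few attacker-controlled nodes in a synthetic graph to push down a surrogate one-layer GNN's score for a target node. The surrogate model, graph, and node choices are made-up assumptions, not the paper's setup.

```python
# Illustrative sketch only: joint ("multi-instance") perturbation of several
# attacker-controlled nodes against a toy surrogate GNN. Not the paper's attack.
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4                                     # toy graph: 6 nodes, 4-dim features
A = np.eye(n) + (rng.random((n, n)) < 0.3)      # adjacency with self-loops
A = ((A + A.T) > 0).astype(float)
A_hat = A / A.sum(axis=1, keepdims=True)        # row-normalized propagation
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, 1))                     # surrogate GNN weights (random)

target_node = 0
controlled = [2, 3, 5]                          # attacker controls several nodes at once

def target_score(feats):
    """Score of the target node after one message-passing step."""
    return float((A_hat @ feats @ W)[target_node])

best_X, best = X.copy(), target_score(X)

# Black-box random search: perturb ALL controlled nodes in each trial.
for _ in range(200):
    cand = best_X.copy()
    cand[controlled] += 0.1 * rng.normal(size=(len(controlled), d))
    s = target_score(cand)
    if s < best:                                # push the target node's score down
        best_X, best = cand, s

print(f"score before: {target_score(X):.3f}, after joint perturbation: {best:.3f}")
```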

Semi-decentralized Inference in Heterogeneous Graph Neural Networks for Traffic Demand Forecasting: An Edge-Computing Approach

1 code implementation28 Feb 2023 Mahmoud Nazzal, Abdallah Khreishah, Joyoung Lee, Shaahin Angizi, Ala Al-Fuqaha, Mohsen Guizani

This approach minimizes inter-cloudlet communication, thereby alleviating the communication overhead of the decentralized approach while promoting scalability due to cloudlet-level decentralization.

Edge-computing
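As a rough illustration of the cloudlet-level decentralization mentioned above, the snippet below partitions a synthetic graph's nodes across hypothetical cloudlets and counts which edges can be aggregated locally versus which would require inter-cloudlet messages; the graph, the partition, and the counting are assumptions for illustration only, not the paper's method.

```python
# Hedged sketch (not the paper's system): nodes are assigned to "cloudlets";
# only edges that cross a partition boundary need inter-cloudlet communication.
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = (rng.random((n, n)) < 0.35).astype(float)
A = np.triu(A, 1)
A = A + A.T                                     # undirected graph, no self-loops
part = np.array([0, 0, 0, 1, 1, 1, 2, 2])       # node -> cloudlet assignment (assumed)

edges = np.argwhere(A > 0)                      # each undirected edge appears twice
local = sum(part[i] == part[j] for i, j in edges) // 2
cross = sum(part[i] != part[j] for i, j in edges) // 2
print(f"edges aggregated inside a cloudlet: {local}, edges needing inter-cloudlet messages: {cross}")
```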

Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks

1 code implementation10 Nov 2022 Khang Tran, Phung Lai, NhatHai Phan, Issa Khalil, Yao Ma, Abdallah Khreishah, My Thai, Xintao Wu

Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs), given their ability to learn joint representation from features and edges among nodes in graph data.
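As context for the entry above, the sketch below shows plain (homogeneous) randomized response, the classical local-DP primitive the title builds on: each private bit is kept with probability e^eps / (1 + e^eps) and flipped otherwise. The heterogeneous allocation of privacy budget across features and edges, which is the paper's contribution, is not shown, and the array names are illustrative.

```python
# Minimal randomized-response sketch (illustration only, not the paper's
# heterogeneous mechanism): each private binary bit is kept with probability
# p = e^eps / (1 + e^eps) and flipped otherwise, giving eps-local DP per bit.
import numpy as np

def randomized_response(bits: np.ndarray, eps: float, rng=np.random.default_rng(0)):
    p_keep = np.exp(eps) / (1.0 + np.exp(eps))
    keep = rng.random(bits.shape) < p_keep
    return np.where(keep, bits, 1 - bits)

x = np.array([1, 0, 0, 1, 1, 0, 1, 0])          # e.g., binarized node features or edge bits
print(randomized_response(x, eps=1.0))          # noisy bits released to the GNN trainer
```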

An Adaptive Black-box Defense against Trojan Attacks (TrojDef)

no code implementations5 Sep 2022 Guanxiong Liu, Abdallah Khreishah, Fatima Sharadgah, Issa Khalil

Through mathematical analysis, we show that if the attacker is perfect in injecting the backdoor, the Trojan-infected model will be trained to learn the appropriate prediction confidence bound, which is used to distinguish Trojan and benign inputs under arbitrary perturbations.
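The following is a loose, assumption-heavy illustration of the black-box idea summarized above: query a suspect model on randomly perturbed copies of an input and compare the resulting prediction confidence against a bound. The stand-in `model`, the noise scale, and the fixed bound are all hypothetical and not TrojDef's actual procedure.

```python
# Loose illustration of a black-box confidence-bound check; `model` is a toy stand-in.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Hypothetical black-box classifier returning class probabilities."""
    logits = np.array([x.sum(), -x.sum()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def confidence_under_noise(x, sigma=0.1, trials=50):
    """Average top-class confidence over randomly perturbed copies of x."""
    confs = []
    for _ in range(trials):
        probs = model(x + sigma * rng.normal(size=x.shape))
        confs.append(probs.max())
    return float(np.mean(confs))

x = rng.normal(size=8)
bound = 0.9                      # confidence bound; learned/calibrated in practice
flagged = confidence_under_noise(x) > bound
print("flagged as Trojan input" if flagged else "treated as benign input")
```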

How to Backdoor HyperNetwork in Personalized Federated Learning?

no code implementations18 Jan 2022 Phung Lai, NhatHai Phan, Issa Khalil, Abdallah Khreishah, Xintao Wu

This paper explores previously unknown backdoor risks in HyperNet-based personalized federated learning (HyperNetFL) through poisoning attacks.

Data Poisoning, Personalized Federated Learning

Smart Traffic Monitoring System using Computer Vision and Edge Computing

no code implementations7 Sep 2021 Guanxiong Liu, Hang Shi, Abbas Kiani, Abdallah Khreishah, Jo Young Lee, Nirwan Ansari, Chengjun Liu, Mustafa Yousef

In this paper, we focus on two common traffic monitoring tasks, congestion detection and speed detection, and propose a two-tier edge-computing-based model that takes into account both the limited computing capability in cloudlets and the unstable network condition to the TMC.

Edge-computing, Management
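A minimal sketch of a two-tier split in the spirit of the entry above, assuming a cheap congestion check at the cloudlet and offloading heavier analysis to the traffic management center (TMC) only when the uplink allows; the thresholds, the `Frame` structure, and the routing rule are invented for illustration and are not the paper's model.

```python
# Hypothetical two-tier routing sketch (not the paper's actual pipeline).
from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: int
    vehicle_count: int           # produced by a cheap local detector (assumed)

def route_frame(frame: Frame, uplink_mbps: float,
                congestion_threshold: int = 20, min_bandwidth: float = 2.0) -> str:
    # Tier 1: cheap congestion check stays on the cloudlet.
    if frame.vehicle_count < congestion_threshold:
        return "handled at cloudlet (no congestion)"
    # Tier 2: heavier analysis (e.g., speed estimation) goes to the TMC,
    # but only if the unstable link currently has enough capacity.
    if uplink_mbps >= min_bandwidth:
        return "offloaded to TMC for detailed analysis"
    return "degraded local analysis at cloudlet (link too weak)"

print(route_frame(Frame(1, 25), uplink_mbps=5.0))
print(route_frame(Frame(2, 25), uplink_mbps=0.5))
```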

A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples

no code implementations3 Sep 2021 Guanxiong Liu, Issa Khalil, Abdallah Khreishah, NhatHai Phan

In this work, we show how to jointly exploit adversarial perturbation and model poisoning vulnerabilities to practically launch a new stealthy attack, dubbed AdvTrojan.

Federated Learning, Model Poisoning

Trojans and Adversarial Examples: A Lethal Combination

no code implementations1 Jan 2021 Guanxiong Liu, Issa Khalil, Abdallah Khreishah, Hai Phan

In this work, we naturally unify adversarial examples and Trojan backdoors into a new stealthy attack that is activated only when 1) an adversarial perturbation is injected into the input examples and 2) a Trojan backdoor is simultaneously used to poison the training process.
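This and the preceding entry describe an attack that fires only when an adversarial perturbation and a Trojan trigger are present together. The toy stand-in below makes that joint activation condition concrete; the "infected" classifier, trigger patch, and perturbation are fabricated for illustration and do not reproduce the papers' attack.

```python
# Toy illustration of a combined trigger-plus-perturbation activation condition.
import numpy as np

rng = np.random.default_rng(0)
clean = rng.random((8, 8))                                # toy "image"

trigger = np.zeros_like(clean)
trigger[-2:, -2:] = 1.0                                   # small corner patch
adv_noise = 0.05 * np.sign(rng.normal(size=clean.shape))  # FGSM-style perturbation
outside = np.ones_like(clean, dtype=bool)
outside[-2:, -2:] = False                                 # pixels outside the trigger

def infected_model(x):
    """Stand-in for a Trojan-infected classifier: it misbehaves only when the
    trigger patch AND a noticeable perturbation elsewhere are both present."""
    has_trigger = x[-2:, -2:].min() > 0.9
    has_perturbation = np.abs(x - clean)[outside].mean() > 0.03
    return "attacker's target class" if (has_trigger and has_perturbation) else "correct class"

print(infected_model(clean))                                        # correct class
print(infected_model(clean + trigger))                              # trigger alone: correct class
print(infected_model(np.clip(clean + adv_noise, 0, 1)))             # noise alone: correct class
print(infected_model(np.clip(clean + trigger + adv_noise, 0, 1)))   # both: target class
```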

ManiGen: A Manifold Aided Black-box Generator of Adversarial Examples

no code implementations11 Jul 2020 Guanxiong Liu, Issa Khalil, Abdallah Khreishah, Abdulelah Algosaibi, Adel Aldalbahi, Mohammed Alaneem, Abdulaziz Alhumam, Mohammed Anan

Through an extensive set of experiments on different datasets, we show that (1) adversarial examples generated by ManiGen can mislead standalone classifiers as successfully as the state-of-the-art white-box generator, Carlini, and (2) adversarial examples generated by ManiGen can more effectively attack classifiers with state-of-the-art defenses.

Using Single-Step Adversarial Training to Defend Iterative Adversarial Examples

no code implementations22 Feb 2020 Guanxiong Liu, Issa Khalil, Abdallah Khreishah

Single-step adversarial training methods have been proposed as computationally viable solutions; however, they still fail to defend against iterative adversarial examples.
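For readers unfamiliar with the term, the sketch below shows what a single-step (FGSM-style) adversarial training loop looks like on a toy logistic-regression model; it is a generic illustration under assumed data and hyperparameters, not the defense proposed in the paper.

```python
# Minimal single-step (FGSM-style) adversarial training on toy logistic regression.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(5)
X = rng.normal(size=(32, 5))
y = (X[:, 0] > 0).astype(float)             # toy labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_wrt_input(w, x, label):
    """d(loss)/dx for binary cross-entropy with a linear model."""
    return (sigmoid(x @ w) - label) * w

eps, lr = 0.1, 0.5
for _ in range(100):
    # Single-step attack: one signed-gradient step per example (FGSM).
    X_adv = np.array([x + eps * np.sign(grad_wrt_input(w, x, t))
                      for x, t in zip(X, y)])
    # Train on the adversarially perturbed batch.
    grad_w = X_adv.T @ (sigmoid(X_adv @ w) - y) / len(y)
    w -= lr * grad_w

print("trained weights:", np.round(w, 2))
```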

Using Intuition from Empirical Properties to Simplify Adversarial Training Defense

no code implementations27 Jun 2019 Guanxiong Liu, Issa Khalil, Abdallah Khreishah

Due to their surprisingly good ability to represent complex distributions, neural network (NN) classifiers are widely used in many tasks, including natural language processing, computer vision, and cyber security.

ZK-GanDef: A GAN based Zero Knowledge Adversarial Training Defense for Neural Networks

1 code implementation17 Apr 2019 Guanxiong Liu, Issa Khalil, Abdallah Khreishah

Neural network classifiers have been used successfully in a wide range of applications.

GanDef: A GAN based Adversarial Training Defense for Neural Network Classifier

no code implementations6 Mar 2019 Guanxiong Liu, Issa Khalil, Abdallah Khreishah

Machine learning models, especially neural network (NN) classifiers, are widely used in many applications, including natural language processing, computer vision, and cybersecurity.

Feature Selection

Indoor Localization Using Visible Light Via Fusion Of Multiple Classifiers

no code implementations7 Mar 2017 Xiansheng Guo, Sihua Shao, Nirwan Ansari, Abdallah Khreishah

A multiple-classifier-fusion localization technique using received signal strengths (RSSs) of visible light is proposed, in which the system transmits intensity-modulated sinusoidal signals of different intensities via LEDs and the signals are received by a photodiode (PD) placed at various grid points.

Indoor Localization
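A hedged sketch of the general multiple-classifier-fusion idea behind the entry above: several simple matchers predict the grid cell from an RSS fingerprint and their votes are fused by majority. The grid layout, RSS values, matchers, and fusion rule are illustrative assumptions, not the paper's technique.

```python
# Illustrative classifier-fusion sketch for RSS fingerprinting (not the paper's rule).
import numpy as np

rng = np.random.default_rng(0)
grid_rss = rng.uniform(-70, -30, size=(9, 4))     # 9 grid points, 4 RSS readings each
labels = np.arange(9)

def nearest(metric):
    """Build a nearest-fingerprint classifier under the given distance metric."""
    def classify(rss):
        d = metric(grid_rss - rss)
        return labels[int(np.argmin(d))]
    return classify

classifiers = [
    nearest(lambda diff: np.abs(diff).sum(axis=1)),   # L1 matcher
    nearest(lambda diff: (diff ** 2).sum(axis=1)),    # L2 matcher
    nearest(lambda diff: np.abs(diff).max(axis=1)),   # L-inf matcher
]

measurement = grid_rss[4] + rng.normal(0, 1.5, size=4)   # noisy reading near cell 4
votes = [clf(measurement) for clf in classifiers]
fused = int(np.bincount(votes).argmax())                  # majority-vote fusion
print("individual votes:", votes, "-> fused estimate: grid cell", fused)
```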
