Search Results for author: Issa Khalil

Found 16 papers, 4 papers with code

Multi-Instance Adversarial Attack on GNN-Based Malicious Domain Detection

1 code implementation • 22 Aug 2023 • Mahmoud Nazzal, Issa Khalil, Abdallah Khreishah, NhatHai Phan, Yao Ma

In this work, we call the attack that manipulates several nodes in the domain maliciousness graph (DMG) concurrently a multi-instance evasion attack.

Adversarial Attack
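
As an illustration only (the abstract above names the setting, not the algorithm): a minimal black-box sketch of perturbing several attacker-controlled nodes jointly. The interface is assumed, not taken from the paper; score_fn is a hypothetical callable returning the victim GNN's maliciousness score for the target domain, and random search stands in for the paper's actual optimization.

```python
import numpy as np

def multi_instance_evasion(features, controlled_nodes, score_fn,
                           budget=0.1, n_trials=200, seed=0):
    """Jointly perturb the features of several attacker-controlled nodes
    so the target domain's maliciousness score (from score_fn) drops."""
    rng = np.random.default_rng(seed)
    best = features.copy()
    best_score = score_fn(best)
    for _ in range(n_trials):
        cand = best.copy()
        # the "multi-instance" part: all controlled nodes move at once
        noise = rng.uniform(-budget, budget,
                            size=(len(controlled_nodes), features.shape[1]))
        cand[controlled_nodes] += noise
        score = score_fn(cand)
        if score < best_score:
            best, best_score = cand, score
    return best, best_score
```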

FairDP: Certified Fairness with Differential Privacy

no code implementations • 25 May 2023 • Khang Tran, Ferdinando Fioretto, Issa Khalil, My T. Thai, NhatHai Phan

This paper introduces FairDP, a novel mechanism designed to achieve certified fairness with differential privacy (DP).

Fairness

Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks

1 code implementation • 10 Nov 2022 • Khang Tran, Phung Lai, NhatHai Phan, Issa Khalil, Yao Ma, Abdallah Khreishah, My Thai, Xintao Wu

Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs), given their ability to learn joint representations from node features and the edges among nodes in graph data.
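
The mechanism the title names is randomized response applied with varying privacy budgets. Below is a minimal sketch of the standard binary randomized-response primitive plus a naive per-column heterogeneous variant; the per-feature budget split is an assumption for illustration, not the paper's actual construction.

```python
import numpy as np

def randomized_response(bits, eps, rng):
    """Binary randomized response: keep each bit with probability
    e^eps / (e^eps + 1), flip it otherwise -- the standard
    eps-local-DP mechanism."""
    p_keep = np.exp(eps) / (np.exp(eps) + 1.0)
    keep = rng.random(bits.shape) < p_keep
    return np.where(keep, bits, 1 - bits)

def heterogeneous_rr(features, per_column_eps, seed=0):
    """Heterogeneous variant: each feature column gets its own budget,
    loosely mirroring the idea of spending privacy unevenly across
    features/edges."""
    rng = np.random.default_rng(seed)
    out = features.copy()
    for j, eps in enumerate(per_column_eps):
        out[:, j] = randomized_response(features[:, j], eps, rng)
    return out
```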

Ten Years after ImageNet: A 360° Perspective on AI

no code implementations • 1 Oct 2022 • Sanjay Chawla, Preslav Nakov, Ahmed Ali, Wendy Hall, Issa Khalil, Xiaosong Ma, Husrev Taha Sencar, Ingmar Weber, Michael Wooldridge, Ting Yu

The rise of attention networks, self-supervised learning, generative modeling, and graph neural networks has widened the application space of AI.

Decision Making • Fairness +1

An Adaptive Black-box Defense against Trojan Attacks (TrojDef)

no code implementations • 5 Sep 2022 • Guanxiong Liu, Abdallah Khreishah, Fatima Sharadgah, Issa Khalil

Through mathematical analysis, we show that if the attacker is perfect in injecting the backdoor, the Trojan-infected model will be trained to learn the appropriate prediction confidence bound, which is used to distinguish Trojan and benign inputs under arbitrary perturbations.
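
A minimal sketch of the confidence-bound test this abstract describes, assuming a black-box predict_proba classifier, Gaussian noise, and an illustrative bound of 0.95; the paper derives its bound, so these values are placeholders.

```python
import numpy as np

def perturbed_confidence(x, predict_proba, n_samples=50, sigma=0.1, seed=0):
    """Average top-class confidence of the black-box classifier when
    the input is hit with random Gaussian noise."""
    rng = np.random.default_rng(seed)
    confs = [predict_proba(x + rng.normal(0.0, sigma, size=x.shape)).max()
             for _ in range(n_samples)]
    return float(np.mean(confs))

def looks_trojaned(x, predict_proba, bound=0.95):
    # A trigger-carrying input tends to keep abnormally stable, high
    # confidence under arbitrary perturbations; benign inputs do not.
    return perturbed_confidence(x, predict_proba) > bound
```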

How to Backdoor HyperNetwork in Personalized Federated Learning?

no code implementations • 18 Jan 2022 • Phung Lai, NhatHai Phan, Issa Khalil, Abdallah Khreishah, Xintao Wu

This paper explores previously unknown backdoor risks in HyperNet-based personalized federated learning (HyperNetFL) through poisoning attacks.

Data Poisoning • Personalized Federated Learning

A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples

no code implementations • 3 Sep 2021 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah, NhatHai Phan

In this work, we show how to jointly exploit adversarial perturbation and model poisoning vulnerabilities to practically launch a new stealthy attack, dubbed AdvTrojan.

Federated Learning • Model Poisoning

Trojans and Adversarial Examples: A Lethal Combination

no code implementations • 1 Jan 2021 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah, Hai Phan

In this work, we naturally unify adversarial examples and Trojan backdoors into a new stealthy attack that is activated only when, simultaneously, 1) an adversarial perturbation is injected into the input examples and 2) a Trojan backdoor is used to poison the training process.
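
A minimal sketch of that dual activation condition, assuming image-like inputs in [0, 1], an L-infinity-bounded perturbation, and a binary trigger mask; all shapes and values here are illustrative, not the paper's exact construction.

```python
import numpy as np

def compose_dual_trigger_input(x, perturbation, trigger, mask, eps=0.03):
    """Build an input that carries both activation conditions at once:
    a bounded adversarial perturbation plus a Trojan trigger patch
    stamped where mask == 1."""
    x_adv = np.clip(x + np.clip(perturbation, -eps, eps), 0.0, 1.0)
    return np.where(mask.astype(bool), trigger, x_adv)
```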

Time-Window Group-Correlation Support vs. Individual Features: A Detection of Abnormal Users

1 code implementation • 27 Dec 2020 • Lun-Pin Yuan, Euijin Choo, Ting Yu, Issa Khalil, Sencun Zhu

Autoencoder-based anomaly detection methods have been used in identifying anomalous users from large-scale enterprise logs with the assumption that adversarial activities do not follow past habitual patterns.

Anomaly Detection
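
For context, a minimal sketch of the autoencoder-based baseline this abstract starts from: score each user by how badly a habit-trained autoencoder reconstructs their activity. The Keras-style .predict() interface and the mean-plus-three-sigma threshold are assumptions, not the paper's method.

```python
import numpy as np

def flag_abnormal_users(autoencoder, activity, threshold=None):
    """Per-user anomaly scoring: high reconstruction error means the
    behavior departs from past habitual patterns."""
    recon = autoencoder.predict(activity)
    errors = np.mean((activity - recon) ** 2, axis=1)   # per-user MSE
    if threshold is None:
        threshold = errors.mean() + 3 * errors.std()    # common heuristic
    return errors, errors > threshold
```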

Morshed: Guiding Behavioral Decision-Makers towards Better Security Investment in Interdependent Systems

no code implementations • 12 Nov 2020 • Mustafa Abdallah, Daniel Woods, Parinaz Naghizadeh, Issa Khalil, Timothy Cason, Shreyas Sundaram, Saurabh Bagchi

We model the behavioral biases of human decision-making in securing interdependent systems and show that such behavioral decision-making leads to a suboptimal pattern of resource allocation compared to non-behavioral (rational) decision-making.

Decision Making

ManiGen: A Manifold Aided Black-box Generator of Adversarial Examples

no code implementations • 11 Jul 2020 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah, Abdulelah Algosaibi, Adel Aldalbahi, Mohammed Alaneem, Abdulaziz Alhumam, Mohammed Anan

Through an extensive set of experiments on different datasets, we show that (1) adversarial examples generated by ManiGen can mislead standalone classifiers as successfully as the state-of-the-art white-box generator, Carlini, and (2) adversarial examples generated by ManiGen can more effectively attack classifiers equipped with state-of-the-art defenses.

BASCPS: How does behavioral decision making impact the security of cyber-physical systems?

no code implementations • 4 Apr 2020 • Mustafa Abdallah, Daniel Woods, Parinaz Naghizadeh, Issa Khalil, Timothy Cason, Shreyas Sundaram, Saurabh Bagchi

We model the security investment decisions made by the defenders as a security game.

Cryptography and Security • Computer Science and Game Theory

Using Single-Step Adversarial Training to Defend Iterative Adversarial Examples

no code implementations • 22 Feb 2020 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah

Single-step adversarial training methods have been proposed as computationally viable solutions; however, they still fail to defend against iterative adversarial examples.
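
For context, a minimal single-step (FGSM-style) adversarial training step of the kind the abstract refers to, written in PyTorch; hyperparameters such as eps = 8/255 are conventional placeholders, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_training_step(model, optimizer, x, y, eps=8 / 255):
    """Craft the adversarial example with a single gradient-sign step,
    then train on it. Iterative attacks such as PGD chain many such
    steps, which is why single-step training alone often breaks
    against them."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```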

Using Intuition from Empirical Properties to Simplify Adversarial Training Defense

no code implementations • 27 Jun 2019 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah

Due to their surprisingly good power to represent complex distributions, neural network (NN) classifiers are widely used in many tasks, including natural language processing, computer vision, and cyber security.

ZK-GanDef: A GAN based Zero Knowledge Adversarial Training Defense for Neural Networks

1 code implementation • 17 Apr 2019 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah

Neural network classifiers have been used successfully in a wide range of applications.

GanDef: A GAN based Adversarial Training Defense for Neural Network Classifier

no code implementations • 6 Mar 2019 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah

Machine learning models, especially neural network (NN) classifiers, are widely used in many applications including natural language processing, computer vision and cybersecurity.

feature selection
