1 code implementation • 22 Aug 2023 • Mahmoud Nazzal, Issa Khalil, Abdallah Khreishah, NhatHai Phan, Yao Ma
In this work, we call the attack that manipulates several nodes in the DMG concurrently a multi-instance evasion attack.
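A minimal numpy sketch of the idea (the graph, weights, and perturbation rule below are illustrative assumptions, not the paper's attack): several attacker-controlled nodes are perturbed concurrently, and their combined influence propagates through a toy GCN-style layer to change predictions.

import numpy as np

rng = np.random.default_rng(0)

# Toy graph: adjacency with self-loops, row-normalized (GCN-style propagation).
A = np.array([[1., 1., 0., 0.],
              [1., 1., 1., 0.],
              [0., 1., 1., 1.],
              [0., 0., 1., 1.]])
A_hat = A / A.sum(axis=1, keepdims=True)

X = rng.normal(size=(4, 3))   # node features
W = rng.normal(size=(3, 2))   # classifier weights, 2 classes

def predict(features):
    return (A_hat @ features @ W).argmax(axis=1)

print("clean predictions:      ", predict(X))

# Multi-instance evasion: perturb SEVERAL controlled nodes at once, so their
# combined influence propagates to neighboring nodes' predictions.
controlled = [1, 2]
X_adv = X.copy()
X_adv[controlled] += 2.0 * rng.normal(size=(len(controlled), 3))
print("multi-instance evasion: ", predict(X_adv))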
no code implementations • 25 May 2023 • Khang Tran, Ferdinando Fioretto, Issa Khalil, My T. Thai, NhatHai Phan
This paper introduces FairDP, a novel mechanism designed to achieve certified fairness with differential privacy (DP).
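FairDP's actual mechanism (group-wise training with certified fairness bounds) is more involved; the sketch below only illustrates the DP ingredient, applying the standard Gaussian mechanism to a per-group statistic (group names, sizes, and privacy parameters are assumptions).

import numpy as np

rng = np.random.default_rng(1)

# Release a per-group mean under (epsilon, delta)-DP with the Gaussian mechanism.
scores = {"group_a": rng.uniform(size=200), "group_b": rng.uniform(size=150)}

sensitivity = 1.0 / 150   # one record shifts a group mean by at most 1/n (min n used)
epsilon, delta = 1.0, 1e-5
sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon

for g, s in scores.items():
    noisy_mean = s.mean() + rng.normal(scale=sigma)
    print(f"{g}: true={s.mean():.3f}  dp={noisy_mean:.3f}")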
1 code implementation • 10 Nov 2022 • Khang Tran, Phung Lai, NhatHai Phan, Issa Khalil, Yao Ma, Abdallah Khreishah, My Thai, Xintao Wu
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs), given their ability to learn joint representations from the features and edges among nodes in graph data.
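As one concrete instance of a PIA, a confidence-thresholding membership inference can be sketched as follows (the confidence distributions are simulated assumptions, not the paper's attack):

import numpy as np

rng = np.random.default_rng(2)

# Models tend to be more confident on training members, so thresholding the
# max predicted probability separates members from non-members above chance.
member_conf = rng.beta(8, 2, size=1000)      # overconfident on members
nonmember_conf = rng.beta(5, 3, size=1000)   # less confident otherwise

threshold = 0.8
tpr = (member_conf > threshold).mean()
fpr = (nonmember_conf > threshold).mean()
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}  attack advantage={tpr - fpr:.2f}")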
no code implementations • 1 Oct 2022 • Sanjay Chawla, Preslav Nakov, Ahmed Ali, Wendy Hall, Issa Khalil, Xiaosong Ma, Husrev Taha Sencar, Ingmar Weber, Michael Wooldridge, Ting Yu
The rise of attention networks, self-supervised learning, generative modeling, and graph neural networks has widened the application space of AI.
no code implementations • 5 Sep 2022 • Guanxiong Liu, Abdallah Khreishah, Fatima Sharadgah, Issa Khalil
Through mathematical analysis, we show that if the attacker injects the backdoor perfectly, the Trojan-infected model learns an appropriate prediction-confidence bound, which is then used to distinguish Trojan from benign inputs under arbitrary perturbations.
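A minimal sketch of such a confidence-bound check (the logits, bound, and numbers are illustrative assumptions):

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

benign_logits = np.array([2.0, 1.2, 0.8])
trojan_logits = np.array([9.0, 0.5, 0.3])  # a perfect trigger saturates the target class

bound = 0.95  # learned confidence bound (illustrative value)
for name, z in [("benign", benign_logits), ("trojan", trojan_logits)]:
    conf = softmax(z).max()
    print(f"{name}: confidence={conf:.3f}, exceeds bound: {conf > bound}")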
no code implementations • 18 Jan 2022 • Phung Lai, NhatHai Phan, Issa Khalil, Abdallah Khreishah, Xintao Wu
This paper explores previously unknown backdoor risks in HyperNet-based personalized federated learning (HyperNetFL) through poisoning attacks.
no code implementations • 3 Sep 2021 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah, NhatHai Phan
In this work, we show how to jointly exploit adversarial perturbation and model poisoning vulnerabilities to practically launch a new stealthy attack, dubbed AdvTrojan.
no code implementations • 1 Jan 2021 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah, Hai Phan
In this work, we naturally unify adversarial examples and Trojan backdoors into a new stealthy attack that is activated only when 1) an adversarial perturbation is injected into the input examples and 2) a Trojan backdoor has been used to poison the training process, with both conditions holding simultaneously.
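A toy linear sketch of this joint activation (all weights and magnitudes are illustrative assumptions): neither the trigger nor the perturbation alone flips the prediction, but their combination does.

import numpy as np

w = np.array([1.0, -1.0, 0.2, 3.0])   # last weight reacts to the trigger pixel
x = np.array([0.8, 0.3, 0.1, 0.0])    # clean input, trigger pixel off

def label(v):
    return int(w @ v > 1.0)           # 1 = attacker's target class

trigger = np.array([0.0, 0.0, 0.0, 0.1])            # small Trojan trigger
fgsm = 0.15 * np.sign(w) * np.array([1, 1, 1, 0])   # FGSM-style step, no trigger dim

print("clean:        ", label(x))                   # 0
print("trigger only: ", label(x + trigger))         # still 0
print("perturb only: ", label(x + fgsm))            # still 0
print("both combined:", label(x + trigger + fgsm))  # flips to 1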
1 code implementation • 27 Dec 2020 • Lun-Pin Yuan, Euijin Choo, Ting Yu, Issa Khalil, Sencun Zhu
Autoencoder-based anomaly detection methods have been used in identifying anomalous users from large-scale enterprise logs with the assumption that adversarial activities do not follow past habitual patterns.
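A minimal sketch of reconstruction-error detection, using a PCA projection as a stand-in for a linear autoencoder (an assumption for brevity): habitual activity reconstructs well, while activity that departs from past patterns does not.

import numpy as np

rng = np.random.default_rng(3)

# Habitual behavior lives near a low-dimensional subspace.
latent = rng.normal(size=(500, 3))
normal = latent @ rng.normal(size=(3, 10)) + 0.05 * rng.normal(size=(500, 10))

mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
encode = Vt[:3]                        # 3-dim "bottleneck"

def recon_error(x):
    z = (x - mean) @ encode.T          # encode
    x_hat = z @ encode + mean          # decode
    return np.linalg.norm(x - x_hat)

threshold = np.percentile([recon_error(x) for x in normal], 99)

anomaly = 2.0 * rng.normal(size=10)    # activity unlike past habitual patterns
print("flagged as anomalous:", recon_error(anomaly) > threshold)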
no code implementations • 12 Nov 2020 • Mustafa Abdallah, Daniel Woods, Parinaz Naghizadeh, Issa Khalil, Timothy Cason, Shreyas Sundaram, Saurabh Bagchi
We model the behavioral biases of human decision-making in securing interdependent systems and show that such behavioral decision-making leads to a suboptimal pattern of resource allocation compared to non-behavioral (rational) decision-making.
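One standard way to model such biases is a nonlinear probability weighting function; the Prelec form below is an illustrative assumption, not necessarily the exact function used in the paper. Small attack probabilities are overweighted and large ones underweighted, which skews how a behavioral defender allocates its security budget.

import numpy as np

def prelec(p, alpha=0.6):
    # Perceived probability under a Prelec-style weighting curve.
    return np.exp(-(-np.log(p)) ** alpha)

for p in [0.01, 0.1, 0.5, 0.9, 0.99]:
    print(f"true p={p:.2f}  perceived w(p)={prelec(p):.2f}")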
no code implementations • 11 Jul 2020 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah, Abdulelah Algosaibi, Adel Aldalbahi, Mohammed Alaneem, Abdulaziz Alhumam, Mohammed Anan
Through an extensive set of experiments on different datasets, we show that (1) adversarial examples generated by ManiGen can mislead standalone classifiers, being as successful as the state-of-the-art white-box generator, Carlini, and (2) adversarial examples generated by ManiGen can more effectively attack classifiers equipped with state-of-the-art defenses.
no code implementations • 4 Apr 2020 • Mustafa Abdallah, Daniel Woods, Parinaz Naghizadeh, Issa Khalil, Timothy Cason, Shreyas Sundaram, Saurabh Bagchi
We model the security investment decisions made by the defenders as a security game.
Cryptography and Security • Computer Science and Game Theory
no code implementations • 22 Feb 2020 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah
Single-step adversarial training methods have been proposed as computationally viable solutions; however, they still fail to defend against iterative adversarial examples.
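A toy sketch of why (the network, seed, and step sizes are illustrative assumptions): against a nonlinear model, a single FGSM step uses the gradient at the clean point only, while iterative PGD re-evaluates the gradient after each small step and typically finds a stronger perturbation within the same epsilon ball.

import numpy as np

rng = np.random.default_rng(4)

W1, W2 = rng.normal(size=(8, 3)), rng.normal(size=8)
x = rng.normal(size=3)

def loss(v):                         # loss of a tiny tanh network, target = 0
    return (W2 @ np.tanh(W1 @ v)) ** 2

def grad(v, h=1e-5):                 # finite-difference gradient
    return np.array([(loss(v + h * e) - loss(v - h * e)) / (2 * h)
                     for e in np.eye(3)])

eps = 0.5
fgsm = x + eps * np.sign(grad(x))    # single step straight to the corner

pgd = x.copy()                       # many small projected steps
for _ in range(20):
    pgd = np.clip(pgd + 0.05 * np.sign(grad(pgd)), x - eps, x + eps)

print(f"clean: {loss(x):.3f}  FGSM: {loss(fgsm):.3f}  PGD: {loss(pgd):.3f}")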
no code implementations • 27 Jun 2019 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah
Owing to their surprisingly strong power to represent complex distributions, neural network (NN) classifiers are widely used in many tasks, including natural language processing, computer vision, and cyber security.
1 code implementation • 17 Apr 2019 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah
Neural network classifiers have been used successfully in a wide range of applications.
no code implementations • 6 Mar 2019 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah
Machine learning models, especially neural network (NN) classifiers, are widely used in many applications, including natural language processing, computer vision, and cybersecurity.