1 code implementation • 22 Aug 2023 • Mahmoud Nazzal, Issa Khalil, Abdallah Khreishah, NhatHai Phan, Yao Ma
In this work, we call the attack that manipulates several nodes in the DMG concurrently a multi-instance evasion attack.
no code implementations • 25 May 2023 • Khang Tran, Ferdinando Fioretto, Issa Khalil, My T. Thai, NhatHai Phan
This paper introduces FairDP, a novel mechanism designed to achieve certified fairness with differential privacy (DP).
no code implementations • 10 Mar 2023 • Xiaopeng Jiang, Thinh On, NhatHai Phan, Hessamaldin Mohammadi, Vijaya Datta Mayyuri, An Chen, Ruoming Jin, Cristian Borcea
However, no existing mobile sensing DL system simultaneously achieves good model accuracy, adapts to user mobility behavior, scales well as the number of users increases, and protects user data privacy.
1 code implementation • 24 Feb 2023 • Truc Nguyen, Phung Lai, Khang Tran, NhatHai Phan, My T. Thai
Federated learning (FL) was originally regarded as a framework for collaborative learning among clients with data privacy protection through a coordinating server.
no code implementations • 8 Dec 2022 • Truc Nguyen, Phung Lai, NhatHai Phan, My T. Thai
Recent development in the field of explainable artificial intelligence (XAI) has helped improve trust in Machine-Learning-as-a-Service (MLaaS) systems, in which an explanation is provided together with the model prediction in response to each query.
1 code implementation • 10 Nov 2022 • Khang Tran, Phung Lai, NhatHai Phan, Issa Khalil, Yao Ma, Abdallah Khreishah, My Thai, Xintao Wu
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs), given their ability to learn joint representations from node features and the edges among nodes in graph data.
1 code implementation • 1 Nov 2022 • Phung Lai, NhatHai Phan, Tong Sun, Rajiv Jain, Franck Dernoncourt, Jiuxiang Gu, Nikolaos Barmpalios
In this paper, we introduce a novel concept of user-entity differential privacy (UeDP) to provide formal privacy protection simultaneously to both sensitive entities in textual data and data owners in learning natural language models (NLMs).
1 code implementation • 26 Jul 2022 • Phung Lai, Han Hu, NhatHai Phan, Ruoming Jin, My T. Thai, An M. Chen
In this paper, we show that the process of continually learning new tasks while memorizing previous tasks introduces previously unknown privacy risks and makes it challenging to bound the privacy loss.
no code implementations • 18 Jan 2022 • Phung Lai, NhatHai Phan, Issa Khalil, Abdallah Khreishah, Xintao Wu
This paper explores previously unknown backdoor risks in HyperNet-based personalized federated learning (HyperNetFL) through poisoning attacks.
no code implementations • 17 Nov 2021 • Xiaopeng Jiang, Han Hu, Vijaya Datta Mayyuri, An Chen, Devu M. Shila, Adriaan Larmuseau, Ruoming Jin, Cristian Borcea, NhatHai Phan
This article presents the design, implementation, and evaluation of FLSys, a mobile-cloud federated learning (FL) system, which can be a key component for an open ecosystem of FL models and apps.
1 code implementation • 11 Oct 2021 • Pradnya Desai, Phung Lai, NhatHai Phan, My T. Thai
In this paper, we focus on preserving differential privacy (DP) in continual learning (CL), in which we train ML models to learn a sequence of new tasks while memorizing previous tasks.
no code implementations • 3 Sep 2021 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah, NhatHai Phan
In this work, we show how to jointly exploit adversarial perturbation and model poisoning vulnerabilities to practically launch a new stealthy attack, dubbed AdvTrojan.
2 code implementations • 1 Apr 2020 • Phung Lai, NhatHai Phan, Han Hu, Anuja Badeti, David Newman, Dejing Dou
In this paper, we introduce a novel interpreting framework that learns an interpretable model based on an ontology-based sampling technique to explain agnostic prediction models.
no code implementations • 25 Sep 2019 • NhatHai Phan, My T. Thai, Ruoming Jin, Han Hu, Dejing Dou
In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples.
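A common building block in differentially private deep learning — not necessarily the specific mechanism this paper proposes — is the DP-SGD update: clip each per-example gradient, average, and add Gaussian noise. The sketch below is illustrative only; the learning rate, clipping norm, and noise multiplier are hypothetical parameters.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    """One DP-SGD-style update: clip each example's gradient to
    `clip_norm`, average, and add Gaussian noise scaled by `noise_mult`."""
    grads = np.asarray(per_example_grads, dtype=float)  # shape (n, d)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    # Clip: scale down any gradient whose L2 norm exceeds clip_norm.
    grads = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    n = grads.shape[0]
    noisy_mean = grads.mean(axis=0) + np.random.normal(
        0.0, noise_mult * clip_norm / n, size=grads.shape[1])
    return np.asarray(params, dtype=float) - lr * noisy_mean
```

With `noise_mult=0` the step reduces to plain clipped SGD, which makes the clipping easy to sanity-check in isolation.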
no code implementations • 5 Jun 2019 • Minh N. Vu, Truc D. Nguyen, NhatHai Phan, Ralucca Gera, My T. Thai
Given a classifier's prediction and the corresponding explanation on that prediction, c-Eval is the minimum-distortion perturbation that successfully alters the prediction while keeping the explanation's features unchanged.
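For intuition (this is a toy analogue, not the paper's algorithm), the c-Eval idea can be worked out in closed form for a linear classifier sign(w·x + b): find the smallest L2 perturbation that flips the prediction while leaving a fixed set of "explanation" features unchanged. All names below are illustrative.

```python
import numpy as np

def min_distortion_flip(x, w, b, keep_idx, slack=1e-6):
    """Smallest L2 perturbation flipping sign(w.x + b) while keeping
    the coordinates in `keep_idx` (the 'explanation' features) fixed.
    Closed form: step across the boundary along the free part of w."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    free = np.ones_like(w, dtype=bool)
    free[list(keep_idx)] = False          # explanation features stay fixed
    w_free = np.where(free, w, 0.0)
    if not w_free.any():
        raise ValueError("no free coordinates to perturb")
    margin = w @ x + b
    # Step just past the decision boundary, restricted to free coords.
    delta = -(margin * (1.0 + slack) / (w_free @ w_free)) * w_free
    return x + delta
```

The distortion ‖delta‖ grows as more features are locked into `keep_idx`, which mirrors the intuition behind using c-Eval as an explanation-quality score.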
4 code implementations • 2 Jun 2019 • NhatHai Phan, Minh Vu, Yang Liu, Ruoming Jin, Dejing Dou, Xintao Wu, My T. Thai
In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve differential privacy in deep neural networks, with provable robustness against adversarial examples.
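The HGM generalizes the classical Gaussian mechanism by letting the noise scale vary across coordinates. As a rough illustration (not the paper's exact construction), one can start from the standard calibration and redistribute noise with a hypothetical per-coordinate `weights` vector:

```python
import numpy as np

def gaussian_mechanism(x, sensitivity, epsilon, delta, weights=None):
    """Add Gaussian noise to vector x. `weights` (an illustrative
    parameter, not from the paper) reshapes the noise across
    coordinates while roughly preserving the total noise energy,
    loosely mimicking a heterogeneous Gaussian mechanism."""
    x = np.asarray(x, dtype=float)
    # Classical (epsilon, delta) calibration for the Gaussian mechanism.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    if weights is None:
        weights = np.ones_like(x)
    weights = np.asarray(weights, dtype=float)
    # Normalize so the mean squared noise scale stays at sigma**2.
    scale = sigma * weights / np.sqrt(np.mean(weights ** 2))
    return x + np.random.normal(0.0, scale)
```

With uniform weights this reduces to the standard Gaussian mechanism; a rigorous heterogeneous variant needs a careful per-coordinate sensitivity analysis, which the sketch omits.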
no code implementations • 23 Mar 2019 • NhatHai Phan, My T. Thai, Ruoming Jin, Han Hu, Dejing Dou
In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples.
2 code implementations • 18 Sep 2017 • NhatHai Phan, Xintao Wu, Han Hu, Dejing Dou
In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) the privacy budget consumption is totally independent of the number of training steps; (2) it can adaptively inject noise into features based on the contribution of each feature to the output; and (3) it can be applied to a variety of deep neural networks.
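The adaptive-injection idea in (2) — noisier treatment of less relevant features — can be sketched by splitting a total Laplace budget in proportion to feature relevance. The relevance scores and the proportional split below are hypothetical, not the paper's exact mechanism:

```python
import numpy as np

def adaptive_laplace(features, relevance, epsilon, sensitivity=1.0):
    """Split a total privacy budget `epsilon` across features in
    proportion to (hypothetical) relevance scores: more relevant
    features get a larger budget share, hence less Laplace noise."""
    features = np.asarray(features, dtype=float)
    relevance = np.asarray(relevance, dtype=float)
    # Per-feature budgets, proportional to relevance (they sum to epsilon).
    eps_i = epsilon * relevance / relevance.sum()
    # Laplace scale b_i = sensitivity / eps_i: small budget -> more noise.
    noise = np.random.laplace(0.0, sensitivity / eps_i)
    return features + noise
```

Because the per-feature budgets sum to `epsilon`, sequential composition keeps the total privacy cost at `epsilon` regardless of how the relevance scores divide it.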
2 code implementations • 25 Jun 2017 • NhatHai Phan, Xintao Wu, Dejing Dou
However, only a few scientific studies on preserving privacy in deep learning have been conducted.