Search Results for author: NhatHai Phan

Found 19 papers, 10 papers with code

Multi-Instance Adversarial Attack on GNN-Based Malicious Domain Detection

1 code implementation • 22 Aug 2023 • Mahmoud Nazzal, Issa Khalil, Abdallah Khreishah, NhatHai Phan, Yao Ma

In this work, we call an attack that concurrently manipulates several nodes in the DMG a multi-instance evasion attack.

Adversarial Attack

FairDP: Certified Fairness with Differential Privacy

no code implementations • 25 May 2023 • Khang Tran, Ferdinando Fioretto, Issa Khalil, My T. Thai, NhatHai Phan

This paper introduces FairDP, a novel mechanism designed to achieve certified fairness with differential privacy (DP).

Fairness

Zone-based Federated Learning for Mobile Sensing Data

no code implementations • 10 Mar 2023 • Xiaopeng Jiang, Thinh On, NhatHai Phan, Hessamaldin Mohammadi, Vijaya Datta Mayyuri, An Chen, Ruoming Jin, Cristian Borcea

However, no existing mobile sensing DL system simultaneously achieves good model accuracy, adapts to user mobility behavior, scales well as the number of users increases, and protects user data privacy.

Federated Learning • Human Activity Recognition

Active Membership Inference Attack under Local Differential Privacy in Federated Learning

1 code implementation • 24 Feb 2023 • Truc Nguyen, Phung Lai, Khang Tran, NhatHai Phan, My T. Thai

Federated learning (FL) was originally regarded as a framework for privacy-preserving collaborative learning among clients, coordinated by a central server.

Federated Learning • Inference Attack +2

XRand: Differentially Private Defense against Explanation-Guided Attacks

no code implementations • 8 Dec 2022 • Truc Nguyen, Phung Lai, NhatHai Phan, My T. Thai

Recent development in the field of explainable artificial intelligence (XAI) has helped improve trust in Machine-Learning-as-a-Service (MLaaS) systems, in which an explanation is provided together with the model prediction in response to each query.

Explainable Artificial Intelligence (XAI)

Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks

1 code implementation • 10 Nov 2022 • Khang Tran, Phung Lai, NhatHai Phan, Issa Khalil, Yao Ma, Abdallah Khreishah, My Thai, Xintao Wu

Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs), given their ability to learn joint representation from features and edges among nodes in graph data.
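
As a quick illustration of the building block named in the title, below is a minimal sketch of classical (uniform) randomized response on binary node features. The paper's heterogeneous variant assigns different flipping probabilities across features and edges; the function and toy data here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def randomized_response(bits, epsilon, rng=np.random.default_rng(0)):
    """Keep each private bit with prob e^eps / (e^eps + 1), else flip it.

    This satisfies epsilon-local differential privacy for binary data.
    """
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    keep = rng.random(np.shape(bits)) < p_keep
    return np.where(keep, bits, 1 - bits)

node_features = np.array([[1, 0, 1], [0, 0, 1]])  # toy binary feature matrix
print(randomized_response(node_features, epsilon=1.0))
```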

User-Entity Differential Privacy in Learning Natural Language Models

1 code implementation • 1 Nov 2022 • Phung Lai, NhatHai Phan, Tong Sun, Rajiv Jain, Franck Dernoncourt, Jiuxiang Gu, Nikolaos Barmpalios

In this paper, we introduce a novel concept of user-entity differential privacy (UeDP) to provide formal privacy protection simultaneously to both sensitive entities in textual data and data owners in learning natural language models (NLMs).

Lifelong DP: Consistently Bounded Differential Privacy in Lifelong Machine Learning

1 code implementation • 26 Jul 2022 • Phung Lai, Han Hu, NhatHai Phan, Ruoming Jin, My T. Thai, An M. Chen

In this paper, we show that the process of continually learning new tasks and memorizing previous tasks introduces unknown privacy risks and challenges to bound the privacy loss.

BIG-bench Machine Learning

How to Backdoor HyperNetwork in Personalized Federated Learning?

no code implementations • 18 Jan 2022 • Phung Lai, NhatHai Phan, Issa Khalil, Abdallah Khreishah, Xintao Wu

This paper explores previously unknown backdoor risks in HyperNet-based personalized federated learning (HyperNetFL) through poisoning attacks.

Data Poisoning • Personalized Federated Learning

FLSys: Toward an Open Ecosystem for Federated Learning Mobile Apps

no code implementations • 17 Nov 2021 • Xiaopeng Jiang, Han Hu, Vijaya Datta Mayyuri, An Chen, Devu M. Shila, Adriaan Larmuseau, Ruoming Jin, Cristian Borcea, NhatHai Phan

This article presents the design, implementation, and evaluation of FLSys, a mobile-cloud federated learning (FL) system, which can be a key component for an open ecosystem of FL models and apps.

Data Augmentation • Federated Learning +3

Continual Learning with Differential Privacy

1 code implementation • 11 Oct 2021 • Pradnya Desai, Phung Lai, NhatHai Phan, My T. Thai

In this paper, we focus on preserving differential privacy (DP) in continual learning (CL), in which we train ML models to learn a sequence of new tasks while memorizing previous tasks.

Continual Learning

A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples

no code implementations • 3 Sep 2021 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah, NhatHai Phan

In this work, we show how to jointly exploit adversarial perturbation and model poisoning vulnerabilities to practically launch a new stealthy attack, dubbed AdvTrojan.

Federated Learning • Model Poisoning

Ontology-based Interpretable Machine Learning for Textual Data

2 code implementations • 1 Apr 2020 • Phung Lai, NhatHai Phan, Han Hu, Anuja Badeti, David Newman, Dejing Dou

In this paper, we introduce a novel interpreting framework that learns an interpretable model based on an ontology-based sampling technique to explain agnostic prediction models.

BIG-bench Machine Learning • Interpretable Machine Learning

Differential Privacy in Adversarial Learning with Provable Robustness

no code implementations • 25 Sep 2019 • NhatHai Phan, My T. Thai, Ruoming Jin, Han Hu, Dejing Dou

In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples.

c-Eval: A Unified Metric to Evaluate Feature-based Explanations via Perturbation

no code implementations • 5 Jun 2019 • Minh N. Vu, Truc D. Nguyen, NhatHai Phan, Ralucca Gera, My T. Thai

Given a classifier's prediction and the corresponding explanation on that prediction, c-Eval is the minimum-distortion perturbation that successfully alters the prediction while keeping the explanation's features unchanged.

Image Classification
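
To make the c-Eval definition above concrete, the sketch below approximates it for a toy linear classifier by random search: it looks for the smallest-norm perturbation that flips the prediction while leaving the explanation's features untouched. The classifier, the stand-in "explanation" (top-k weighted features), and the search strategy are all illustrative assumptions; the paper's formulation is model- and explainer-agnostic.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))  # toy 3-class linear classifier on 5 features

def predict(x):
    return int(np.argmax(W @ x))

def explanation(x, k=2):
    # Stand-in explainer: indices of the k largest |weight * feature|
    # contributions for the predicted class.
    return np.argsort(-np.abs(W[predict(x)] * x))[:k]

def c_eval(x, k=2, n_dirs=200, step=0.05, max_norm=10.0):
    """Approximate the minimum-distortion prediction-flipping perturbation
    that leaves the explained features unchanged (the c-Eval value)."""
    base_pred = predict(x)
    frozen = explanation(x, k)
    best = np.inf
    for _ in range(n_dirs):
        d = rng.normal(size=x.shape)
        d[frozen] = 0.0                  # do not touch explained features
        d /= np.linalg.norm(d)
        for r in np.arange(step, max_norm, step):
            if r >= best:
                break                    # cannot improve along this ray
            if predict(x + r * d) != base_pred:
                best = r
                break
    return best

x = rng.normal(size=5)
print("prediction:", predict(x), "c-Eval (approx.):", c_eval(x))
```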

Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness

4 code implementations • 2 Jun 2019 • NhatHai Phan, Minh Vu, Yang Liu, Ruoming Jin, Dejing Dou, Xintao Wu, My T. Thai

In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve differential privacy in deep neural networks, with provable robustness against adversarial examples.
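
The core idea, heterogeneous rather than uniform noise, can be sketched in a few lines. The per-coordinate noise scales below are placeholders; calibrating them to a concrete (epsilon, delta) budget with the robustness guarantee is the substance of the HGM and is not reproduced here.

```python
import numpy as np

def heterogeneous_gaussian(values, scales, rng=np.random.default_rng(0)):
    """Add zero-mean Gaussian noise with a different std per coordinate,
    instead of the single sigma of the standard Gaussian mechanism."""
    scales = np.asarray(scales, dtype=float)
    assert scales.shape == np.shape(values)
    return values + rng.normal(0.0, scales)

grad = np.array([0.8, -0.1, 0.3])
print(heterogeneous_gaussian(grad, scales=[0.5, 2.0, 1.0]))  # toy scales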

Preserving Differential Privacy in Adversarial Learning with Provable Robustness

no code implementations • 23 Mar 2019 • NhatHai Phan, My T. Thai, Ruoming Jin, Han Hu, Dejing Dou

In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples.

Cryptography and Security

Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning

2 code implementations • 18 Sep 2017 • NhatHai Phan, Xintao Wu, Han Hu, Dejing Dou

In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) The privacy budget consumption is totally independent of the number of training steps; (2) It has the ability to adaptively inject noise into features based on the contribution of each to the output; and (3) It could be applied in a variety of different deep neural networks.
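
Property (2), adaptive noise injection, can be illustrated with a toy budget split: features that contribute more to the output receive a larger share of the privacy budget and therefore less Laplace noise. The contribution scores and the proportional split below are illustrative assumptions; the paper computes contributions with a relevance-propagation technique and handles the budget accounting formally.

```python
import numpy as np

def adaptive_laplace(features, contributions, epsilon, sensitivity=1.0,
                     rng=np.random.default_rng(0)):
    """Split the budget epsilon across features in proportion to their
    contribution, then add Laplace noise with scale sensitivity/eps_i,
    so more influential features are perturbed less."""
    share = np.abs(contributions) / np.abs(contributions).sum()
    eps_i = epsilon * share
    return features + rng.laplace(0.0, sensitivity / eps_i)

x = np.array([0.9, 0.2, 0.4])
contrib = np.array([0.7, 0.1, 0.2])  # toy contribution scores
print(adaptive_laplace(x, contrib, epsilon=1.0))
```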

Preserving Differential Privacy in Convolutional Deep Belief Networks

2 code implementations • 25 Jun 2017 • NhatHai Phan, Xintao Wu, Dejing Dou

However, only a few scientific studies on preserving privacy in deep learning have been conducted.
