Search Results for author: Phung Lai

Found 9 papers, 6 papers with code

Active Membership Inference Attack under Local Differential Privacy in Federated Learning

1 code implementation • 24 Feb 2023 • Truc Nguyen, Phung Lai, Khang Tran, NhatHai Phan, My T. Thai

Federated learning (FL) was originally regarded as a framework for collaborative learning among clients with data privacy protection through a coordinating server.

Tasks: Federated Learning, Inference Attack, +2

XRand: Differentially Private Defense against Explanation-Guided Attacks

no code implementations • 8 Dec 2022 • Truc Nguyen, Phung Lai, NhatHai Phan, My T. Thai

Recent development in the field of explainable artificial intelligence (XAI) has helped improve trust in Machine-Learning-as-a-Service (MLaaS) systems, in which an explanation is provided together with the model prediction in response to each query.

Tasks: Explainable Artificial Intelligence (XAI)

Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks

1 code implementation • 10 Nov 2022 • Khang Tran, Phung Lai, NhatHai Phan, Issa Khalil, Yao Ma, Abdallah Khreishah, My Thai, Xintao Wu

Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs), given their ability to learn joint representation from features and edges among nodes in graph data.

User-Entity Differential Privacy in Learning Natural Language Models

1 code implementation • 1 Nov 2022 • Phung Lai, NhatHai Phan, Tong Sun, Rajiv Jain, Franck Dernoncourt, Jiuxiang Gu, Nikolaos Barmpalios

In this paper, we introduce a novel concept of user-entity differential privacy (UeDP) to provide formal privacy protection simultaneously to both sensitive entities in textual data and data owners in learning natural language models (NLMs).

Lifelong DP: Consistently Bounded Differential Privacy in Lifelong Machine Learning

1 code implementation • 26 Jul 2022 • Phung Lai, Han Hu, NhatHai Phan, Ruoming Jin, My T. Thai, An M. Chen

In this paper, we show that the process of continually learning new tasks and memorizing previous tasks introduces unknown privacy risks and challenges in bounding the privacy loss.

Tasks: BIG-bench Machine Learning
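As background for why the privacy loss in lifelong or continual learning needs explicit bounding: under basic sequential composition, per-task privacy budgets add up across tasks. The sketch below is illustrative only, not the paper's Lifelong DP mechanism; the function name and budget values are hypothetical, and a plain Laplace mechanism stands in for whatever noise the paper actually uses.

```python
import math
import random

def laplace_release(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy statistic satisfying epsilon-DP via the Laplace
    mechanism (inverse-CDF sampling; the stdlib has no Laplace sampler)."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5
    return true_value - scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

# Basic sequential composition: training T tasks, each with its own budget,
# consumes the *sum* of the per-task budgets, so the loss grows with every task.
per_task_eps = [0.5, 0.5, 1.0]
total_eps = sum(per_task_eps)  # 2.0 under sequential composition
```

Each noisy release is unbiased around the true statistic, but the cumulative budget keeps growing unless a tighter accountant (as in the Lifelong DP line of work) bounds it.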

How to Backdoor HyperNetwork in Personalized Federated Learning?

no code implementations • 18 Jan 2022 • Phung Lai, NhatHai Phan, Issa Khalil, Abdallah Khreishah, Xintao Wu

This paper explores previously unknown backdoor risks in HyperNet-based personalized federated learning (HyperNetFL) through poisoning attacks.

Tasks: Data Poisoning, Personalized Federated Learning

Continual Learning with Differential Privacy

1 code implementation • 11 Oct 2021 • Pradnya Desai, Phung Lai, NhatHai Phan, My T. Thai

In this paper, we focus on preserving differential privacy (DP) in continual learning (CL), in which we train ML models to learn a sequence of new tasks while memorizing previous tasks.

Tasks: Continual Learning

Bit-aware Randomized Response for Local Differential Privacy in Federated Learning

no code implementations • 29 Sep 2021 • Phung Lai, Hai Phan, Li Xiong, Khang Phuc Tran, My Thai, Tong Sun, Franck Dernoncourt, Jiuxiang Gu, Nikolaos Barmpalios, Rajiv Jain

In this paper, we develop BitRand, a bit-aware randomized response algorithm, to preserve local differential privacy (LDP) in federated learning (FL).

Tasks: Federated Learning, Image Classification
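As background for the randomized-response papers above: classic binary randomized response, which bit-aware and heterogeneous variants build on, can be sketched as follows. This is a minimal illustration under textbook assumptions, not the authors' algorithms; the function names are hypothetical.

```python
import math
import random

def randomized_response(bit: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps / (1 + e^eps),
    otherwise flip it; this satisfies eps-local differential privacy."""
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_truth else not bit

def estimate_frequency(reports, epsilon):
    """Debias the observed fraction of 1s to estimate the true frequency."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)
```

The aggregator never sees raw bits, yet can recover aggregate statistics after debiasing; works along the lines of BitRand apply this idea at the level of individual bits of the values shared in federated learning.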

Ontology-based Interpretable Machine Learning for Textual Data

2 code implementations • 1 Apr 2020 • Phung Lai, NhatHai Phan, Han Hu, Anuja Badeti, David Newman, Dejing Dou

In this paper, we introduce a novel interpreting framework that learns an interpretable model based on an ontology-based sampling technique to explain prediction models in a model-agnostic way.

Tasks: BIG-bench Machine Learning, Interpretable Machine Learning
