1 code implementation • 24 Feb 2023 • Truc Nguyen, Phung Lai, Khang Tran, NhatHai Phan, My T. Thai
Federated learning (FL) was originally regarded as a framework for collaborative learning among clients, with data privacy protected through a coordinating server.
no code implementations • 8 Dec 2022 • Truc Nguyen, Phung Lai, NhatHai Phan, My T. Thai
Recent development in the field of explainable artificial intelligence (XAI) has helped improve trust in Machine-Learning-as-a-Service (MLaaS) systems, in which an explanation is provided together with the model prediction in response to each query.
1 code implementation • 10 Nov 2022 • Khang Tran, Phung Lai, NhatHai Phan, Issa Khalil, Yao Ma, Abdallah Khreishah, My Thai, Xintao Wu
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs), given their ability to learn joint representations from node features and edges in graph data.
1 code implementation • 1 Nov 2022 • Phung Lai, NhatHai Phan, Tong Sun, Rajiv Jain, Franck Dernoncourt, Jiuxiang Gu, Nikolaos Barmpalios
In this paper, we introduce a novel concept of user-entity differential privacy (UeDP) to provide formal privacy protection simultaneously to both sensitive entities in textual data and data owners in learning natural language models (NLMs).
1 code implementation • 26 Jul 2022 • Phung Lai, Han Hu, NhatHai Phan, Ruoming Jin, My T. Thai, An M. Chen
In this paper, we show that the process of continually learning new tasks while memorizing previous tasks introduces unknown privacy risks and makes it challenging to bound the privacy loss.
no code implementations • 18 Jan 2022 • Phung Lai, NhatHai Phan, Issa Khalil, Abdallah Khreishah, Xintao Wu
This paper explores previously unknown backdoor risks in HyperNet-based personalized federated learning (HyperNetFL) through poisoning attacks.
1 code implementation • 11 Oct 2021 • Pradnya Desai, Phung Lai, NhatHai Phan, My T. Thai
In this paper, we focus on preserving differential privacy (DP) in continual learning (CL), in which we train ML models to learn a sequence of new tasks while memorizing previous tasks.
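The DP training primitive underlying work like this is typically a DP-SGD-style update: clip each per-example gradient, average, and add Gaussian noise calibrated to the clipping norm. A minimal sketch of that standard recipe (illustrative only; the paper's specific mechanism for continual learning is not reproduced here, and all names are placeholders):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One DP-SGD-style update (standard recipe, shown for illustration):
    clip each per-example gradient to clip_norm, average, then add
    Gaussian noise whose scale is proportional to clip_norm."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the per-example sensitivity (clip_norm).
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return params - lr * (avg + noise)
```

With `noise_multiplier=0` this reduces to ordinary clipped SGD; the privacy guarantee comes from choosing the noise scale to match a target (epsilon, delta) budget, which an accountant tracks across the task sequence.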
no code implementations • 29 Sep 2021 • Phung Lai, Hai Phan, Li Xiong, Khang Phuc Tran, My Thai, Tong Sun, Franck Dernoncourt, Jiuxiang Gu, Nikolaos Barmpalios, Rajiv Jain
In this paper, we develop BitRand, a bit-aware randomized response algorithm, to preserve local differential privacy (LDP) in federated learning (FL).
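The building block behind a bit-aware mechanism like this is classic randomized response applied per bit: each bit is flipped with a probability determined by the privacy budget, and the aggregator debiases the collected counts. A minimal sketch of that generic primitive (not BitRand itself; BitRand's bit-position-aware allocation of the budget is not reproduced here):

```python
import math
import random

def randomized_response_bits(bits, epsilon):
    """Flip each bit independently with probability 1/(1 + e^epsilon),
    which satisfies epsilon-LDP per bit (classic randomized response)."""
    p_flip = 1.0 / (1.0 + math.exp(epsilon))
    return [b ^ (1 if random.random() < p_flip else 0) for b in bits]

def unbiased_estimate(sum_reported, n, epsilon):
    """Debias the aggregated count of 1-bits reported by n clients."""
    p = 1.0 / (1.0 + math.exp(epsilon))
    return (sum_reported - n * p) / (1.0 - 2.0 * p)
```

For example, at epsilon = ln 3 each bit is flipped with probability 1/4, and the server's debiasing step recovers an unbiased estimate of the true count from the noisy reports.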
2 code implementations • 1 Apr 2020 • Phung Lai, NhatHai Phan, Han Hu, Anuja Badeti, David Newman, Dejing Dou
In this paper, we introduce a novel interpreting framework that learns an interpretable model based on an ontology-based sampling technique to explain agnostic prediction models.