no code implementations • 13 Mar 2024 • Kang Gu, Md Rafi Ur Rashid, Najrin Sultana, Shagufta Mehnaz
With the rapid development of Large Language Models (LLMs), we have witnessed intense competition among major LLM products such as ChatGPT, LLaMA, and Gemini.
no code implementations • 3 Nov 2023 • Zeyu Song, Ehsanul Kabir, Shagufta Mehnaz
Graph Neural Networks (GNNs) have increasingly become an indispensable tool for learning from graph-structured data, serving applications such as social network analysis and recommendation systems.
no code implementations • 24 Oct 2023 • Md Rafi Ur Rashid, Vishnu Asutosh Dasu, Kang Gu, Najrin Sultana, Shagufta Mehnaz
Federated learning (FL) is becoming a key component in many technology-based applications, including language modeling, where individual FL participants often hold privacy-sensitive text data in their local datasets.
no code implementations • 10 Aug 2023 • Ehsanul Kabir, Zeyu Song, Md Rafi Ur Rashid, Shagufta Mehnaz
This highlights the need to design FL systems that are secure and robust against malicious participants' actions while also ensuring high utility, privacy of local data, and efficiency.
no code implementations • 23 Jan 2022 • Shagufta Mehnaz, Sayanton V. Dibbo, Ehsanul Kabir, Ninghui Li, Elisa Bertino
Increasing use of machine learning (ML) technologies in privacy-sensitive domains such as medical diagnoses, lifestyle predictions, and business decisions highlights the need to better understand whether these ML technologies are leaking sensitive and proprietary training data.
no code implementations • 7 Dec 2020 • Shagufta Mehnaz, Ninghui Li, Elisa Bertino
In this paper, we focus on one kind of model inversion attack, in which the adversary knows the non-sensitive attributes of instances in the training data and aims to infer the value of a sensitive attribute unknown to the adversary, using only oracle access to the target classification model.
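A minimal sketch of how such an oracle-access attribute inference could proceed is shown below. It is not the paper's actual method; the model interface (a scikit-learn-style `predict_proba`), the candidate value set, and the attribute layout are all illustrative assumptions.

```python
import numpy as np

def infer_sensitive_attribute(model, known_attrs, candidate_values, target_label):
    """Hedged sketch: for each candidate value of the sensitive attribute,
    query the target model (oracle access only) on the known non-sensitive
    attributes combined with that guess, and return the candidate whose
    predicted probability for the known label is highest."""
    best_value, best_confidence = None, -np.inf
    for candidate in candidate_values:
        # Assemble a full feature vector: non-sensitive attributes + guess.
        features = np.concatenate([known_attrs, [candidate]]).reshape(1, -1)
        # Oracle access: only the model's output probabilities are observed.
        probs = model.predict_proba(features)[0]
        confidence = probs[target_label]
        if confidence > best_confidence:
            best_value, best_confidence = candidate, confidence
    return best_value
```

Under these assumptions, the adversary never sees the model's parameters; the guess is driven entirely by how confidently the model reproduces the record's known label for each candidate value of the sensitive attribute.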