Search Results for author: Shagufta Mehnaz

Found 6 papers, 0 papers with code

Second-Order Information Matters: Revisiting Machine Unlearning for Large Language Models

no code implementations · 13 Mar 2024 · Kang Gu, Md Rafi Ur Rashid, Najrin Sultana, Shagufta Mehnaz

With the rapid development of Large Language Models (LLMs), we have witnessed intense competition among major LLM products such as ChatGPT, LLaMA, and Gemini.

Machine Unlearning

GNNBleed: Inference Attacks to Unveil Private Edges in Graphs with Realistic Access to GNN Models

no code implementations · 3 Nov 2023 · Zeyu Song, Ehsanul Kabir, Shagufta Mehnaz

Graph Neural Networks (GNNs) have become an indispensable tool for learning from graph-structured data, serving applications such as social network analysis and recommendation systems.

Recommendation Systems

FLTrojan: Privacy Leakage Attacks against Federated Language Models Through Selective Weight Tampering

no code implementations · 24 Oct 2023 · Md Rafi Ur Rashid, Vishnu Asutosh Dasu, Kang Gu, Najrin Sultana, Shagufta Mehnaz

Federated learning (FL) is becoming a key component in many technology-based applications, including language modeling, where individual FL participants often have privacy-sensitive text data in their local datasets. (A generic FedAvg sketch follows this entry as background.)

Federated Learning · Language Modelling
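
For context on the FL setting this abstract assumes, below is a minimal sketch of one federated averaging (FedAvg) round. It is generic background under my own naming (fed_avg, client_weights), not FLTrojan's selective weight tampering attack.

# Minimal single round of federated averaging (FedAvg): the server
# averages weight vectors returned by clients, weighted by local
# dataset size. Generic background only; this is NOT FLTrojan's
# selective weight tampering attack.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """client_weights: list of per-client weight vectors (np.ndarray);
    client_sizes: number of local training examples per client."""
    total = sum(client_sizes)
    aggregated = np.zeros_like(client_weights[0])
    for w, n in zip(client_weights, client_sizes):
        aggregated += (n / total) * w
    return aggregated

# Three hypothetical clients holding different amounts of local data.
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]
print(fed_avg(weights, sizes))  # -> [4. 5.]

Because the server blindly averages whatever clients return, a participant who tampers with its submitted weights can influence the global model, which is the opening the paper studies.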

FLShield: A Validation Based Federated Learning Framework to Defend Against Poisoning Attacks

no code implementations · 10 Aug 2023 · Ehsanul Kabir, Zeyu Song, Md Rafi Ur Rashid, Shagufta Mehnaz

This highlights the need to design FL systems that are secure and robust against malicious participants' actions while also ensuring high utility, privacy of local data, and efficiency. (A rough sketch of the validation-based idea follows this entry.)

Autonomous Vehicles · Federated Learning
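
The "validation based" idea in the title can be roughly pictured as follows: score each client's candidate model on a held-out validation set and aggregate only those that do not degrade the current global model. This is a hedged approximation of that general approach, not the actual FLShield algorithm; the evaluate callable, the tol slack, and all names are my assumptions.

# Hedged sketch of a validation-based poisoning defense: accept only
# client models that do not hurt held-out validation loss relative to
# the current global model. Illustrates the general idea only; this
# is not the actual FLShield algorithm.
import numpy as np

def filter_and_aggregate(global_weights, client_models, evaluate, tol=0.0):
    """global_weights: current global weight vector (np.ndarray);
    client_models: candidate weight vectors submitted by clients;
    evaluate: callable weights -> validation loss (an assumption here);
    tol: slack allowed before a model is flagged as suspicious."""
    baseline = evaluate(global_weights)
    accepted = [w for w in client_models
                if evaluate(w) <= baseline + tol]  # drop poisoned-looking models
    if not accepted:  # if everything looks suspicious, keep the old model
        return global_weights
    return np.mean(accepted, axis=0)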

Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models

no code implementations · 23 Jan 2022 · Shagufta Mehnaz, Sayanton V. Dibbo, Ehsanul Kabir, Ninghui Li, Elisa Bertino

Increasing use of machine learning (ML) technologies in privacy-sensitive domains such as medical diagnoses, lifestyle predictions, and business decisions highlights the need to better understand whether these ML technologies are introducing leakage of sensitive and proprietary training data.

Attribute Inference Attack

Black-box Model Inversion Attribute Inference Attacks on Classification Models

no code implementations · 7 Dec 2020 · Shagufta Mehnaz, Ninghui Li, Elisa Bertino

In this paper, we focus on one kind of model inversion attack, where the adversary knows non-sensitive attributes about instances in the training data and aims to infer the value of a sensitive attribute unknown to the adversary, using oracle access to the target classification model. (A minimal sketch of this attack follows this entry.)

Attribute · Classification +1
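
The mechanism described above lends itself to a short sketch: for each candidate value of the sensitive attribute, complete the record, query the model, and keep the value under which the model is most confident in the record's known true label. This assumes a discrete sensitive attribute and a scikit-learn-style predict_proba oracle; the function name and interface are illustrative, not taken from the paper.

# Sketch of a confidence-based model inversion attribute inference
# attack: try each candidate sensitive value and keep the one that
# makes the oracle most confident in the record's known true label.
# Assumes a scikit-learn-style classifier; all names are illustrative.
import numpy as np

def infer_sensitive_attribute(model, known_attrs, true_label,
                              sensitive_idx, candidate_values):
    best_value, best_conf = None, -1.0
    for value in candidate_values:
        query = np.array(known_attrs, dtype=float)
        query[sensitive_idx] = value                   # plug in the guess
        proba = model.predict_proba(query.reshape(1, -1))[0]
        if proba[true_label] > best_conf:              # confidence in true label
            best_value, best_conf = value, proba[true_label]
    return best_value

The intuition is that a model that has internalized correlations in its training data tends to be most confident when the guessed sensitive value matches the true one.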
