Search Results for author: Virat Shejwalkar

Found 9 papers, 1 paper with code

Security Analysis of SplitFed Learning

no code implementations • 4 Dec 2022 • Momin Ahmad Khan, Virat Shejwalkar, Amir Houmansadr, Fatima Muhammad Anwar

We observe that the model updates in SplitFed have significantly smaller dimensionality compared to FL, which is known to suffer from the curse of dimensionality.

Federated Learning • Model Poisoning
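
A minimal sketch of the dimensionality comparison described above; the architecture and the cut layer are hypothetical and not taken from the paper:

```python
# Illustrative only: compare the size of a client-side update in SplitFed
# against a full-model update in standard FL. Model and split point are
# assumptions, not the paper's setup.
import torch.nn as nn

full_model = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# In SplitFed, clients hold only the layers up to an assumed cut layer.
client_part = full_model[:2]  # hypothetical cut after the first Linear + ReLU

def num_params(module: nn.Module) -> int:
    # Number of trainable parameters = dimensionality of the model update.
    return sum(p.numel() for p in module.parameters() if p.requires_grad)

print("FL update dimensionality:      ", num_params(full_model))
print("SplitFed update dimensionality:", num_params(client_part))
```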

Recycling Scraps: Improving Private Learning by Leveraging Intermediate Checkpoints

no code implementations • 4 Oct 2022 • Virat Shejwalkar, Arun Ganesh, Rajiv Mathews, Om Thakkar, Abhradeep Thakurta

Empirically, we show that the last few checkpoints can provide a reasonable lower bound for the variance of a converged DP model.
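
A toy sketch of the checkpoint-aggregation idea the abstract points to: average the last few checkpoints and use their spread as a crude variance estimate. The window size, averaging rule, and flattened-checkpoint format are assumptions, not the paper's exact estimator.

```python
import numpy as np

def average_checkpoints(checkpoints, k=5):
    # Parameter-wise mean of the last k checkpoints (list of 1-D arrays).
    tail = np.stack(checkpoints[-k:])          # shape: (k, num_params)
    return tail.mean(axis=0)

def checkpoint_variance(checkpoints, k=5):
    # Per-parameter empirical variance across the last k checkpoints.
    tail = np.stack(checkpoints[-k:])
    return tail.var(axis=0)

# Toy usage with random "checkpoints" standing in for flattened model weights.
rng = np.random.default_rng(0)
ckpts = [rng.normal(size=100) for _ in range(20)]
theta_bar = average_checkpoints(ckpts)
var_hat = checkpoint_variance(ckpts)
print(theta_bar.shape, var_hat.mean())
```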

Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture

no code implementations • 15 Oct 2021 • Xinyu Tang, Saeed Mahloujifar, Liwei Song, Virat Shejwalkar, Milad Nasr, Amir Houmansadr, Prateek Mittal

The goal of this work is to train ML models that have high membership privacy while largely preserving their utility; we therefore aim for an empirical membership privacy guarantee, as opposed to the provable guarantees of techniques like differential privacy, which have been shown to degrade model utility.

Privacy Preserving
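
A rough sketch of the ensemble-plus-self-distillation idea: teachers trained on disjoint shards label only data they never saw, and a student is trained on those labels. The model class, sharding, and use of hard labels are illustrative assumptions rather than the paper's architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Train one "teacher" per disjoint shard of the training data.
shards = np.array_split(np.arange(len(X)), 3)
teachers = [LogisticRegression(max_iter=1000).fit(X[idx], y[idx]) for idx in shards]

# Each sample is labeled only by teachers that did NOT train on it,
# so the student never fits a teacher's memorization of that sample.
distill_labels = np.empty(len(X), dtype=int)
for i, idx in enumerate(shards):
    others = [t for j, t in enumerate(teachers) if j != i]
    votes = np.stack([t.predict(X[idx]) for t in others])
    distill_labels[idx] = (votes.mean(axis=0) >= 0.5).astype(int)

student = LogisticRegression(max_iter=1000).fit(X, distill_labels)
print("student accuracy on true labels:", student.score(X, y))
```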

FRL: Federated Rank Learning

no code implementations • 8 Oct 2021 • Hamid Mozaffari, Virat Shejwalkar, Amir Houmansadr

The FRL server uses a voting mechanism to aggregate the parameter rankings submitted by clients in each training epoch to generate the global ranking of the next training epoch.

Federated Learning
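
A minimal sketch of rank-vote aggregation in the spirit of the FRL description above; the Borda-style scoring used here is an assumption and may differ from FRL's actual voting rule:

```python
import numpy as np

def aggregate_rankings(client_rankings):
    # client_rankings: list of 1-D arrays, each a permutation of parameter
    # indices ordered from least to most important. Returns the global ranking.
    n = len(client_rankings[0])
    scores = np.zeros(n)
    for ranking in client_rankings:
        # The position in a client's ranking acts as that client's vote.
        scores[ranking] += np.arange(n)
    return np.argsort(scores)  # least to most important globally

rng = np.random.default_rng(0)
clients = [rng.permutation(10) for _ in range(5)]
print("global ranking:", aggregate_rankings(clients))
```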

FSL: Federated Supermask Learning

no code implementations • 29 Sep 2021 • Hamid Mozaffari, Virat Shejwalkar, Amir Houmansadr

FSL clients share local subnetworks in the form of rankings of network edges; more useful edges have higher ranks.

Federated Learning
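
A small sketch of sharing subnetworks as edge rankings, as described above: per-edge scores are turned into a ranking, and the top-ranked edges form a binary supermask. The scoring rule and sparsity level are assumptions for illustration.

```python
import numpy as np

def edges_to_ranking(edge_scores):
    # Rank edge indices from least to most useful given per-edge scores.
    return np.argsort(edge_scores)

def supermask_from_ranking(ranking, keep_frac=0.5):
    # Binary mask keeping the top `keep_frac` fraction of ranked edges.
    mask = np.zeros(len(ranking), dtype=bool)
    k = int(len(ranking) * keep_frac)
    mask[ranking[-k:]] = True  # highest-ranked edges are at the end
    return mask

scores = np.random.default_rng(1).normal(size=8)
ranking = edges_to_ranking(scores)
print(supermask_from_ranking(ranking, keep_frac=0.25))
```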

Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning

1 code implementation • 23 Aug 2021 • Virat Shejwalkar, Amir Houmansadr, Peter Kairouz, Daniel Ramage

While recent works have indicated that federated learning (FL) may be vulnerable to poisoning attacks by compromised clients, their real impact on production FL systems is not fully understood.

Federated Learning • Misconceptions +1

Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer

no code implementations • 24 Dec 2019 • Hongyan Chang, Virat Shejwalkar, Reza Shokri, Amir Houmansadr

Collaborative (federated) learning enables multiple parties to train a model without sharing their private data, but through repeated sharing of the parameters of their local models.

Federated Learning • Privacy Preserving +1
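
A sketch of the black-box knowledge-transfer idea: parties exchange soft predictions on a shared public set rather than model parameters, and the server aggregates them. The trimmed mean below is a stand-in for whatever robust aggregator Cronus actually uses.

```python
import numpy as np
from scipy import stats

def aggregate_predictions(client_preds, trim=0.1):
    # client_preds: array of shape (num_clients, num_public_samples, classes).
    # Returns robustly aggregated soft labels for the public samples.
    return stats.trim_mean(client_preds, proportiontocut=trim, axis=0)

rng = np.random.default_rng(0)
preds = rng.dirichlet(np.ones(10), size=(5, 100))   # 5 clients, 100 samples
soft_labels = aggregate_predictions(preds)
print(soft_labels.shape)   # (100, 10)
```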

Membership Privacy for Machine Learning Models Through Knowledge Transfer

no code implementations • 15 Jun 2019 • Virat Shejwalkar, Amir Houmansadr

Large capacity machine learning (ML) models are prone to membership inference attacks (MIAs), which aim to infer whether the target sample is a member of the target model's training dataset.

BIG-bench Machine Learning • General Classification +4
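
A minimal sketch of a loss-threshold membership inference attack of the kind this abstract describes; the synthetic losses and median threshold are illustrative assumptions only.

```python
import numpy as np

def loss_threshold_mia(per_sample_losses, threshold):
    # Guess "member" when the target model's loss on a sample is low.
    return per_sample_losses < threshold

rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.2, size=100)      # toy "train" losses
nonmember_losses = rng.exponential(scale=1.0, size=100)   # toy "test" losses
losses = np.concatenate([member_losses, nonmember_losses])
guesses = loss_threshold_mia(losses, threshold=np.median(losses))
truth = np.concatenate([np.ones(100, bool), np.zeros(100, bool)])
print("attack accuracy:", (guesses == truth).mean())
```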
