Search Results for author: Ashwinee Panda

Found 9 papers, 3 papers with code

Communication-Efficient Federated Learning with Sketching

no code implementations ICML 2020 Daniel Rothchild, Ashwinee Panda, Enayat Ullah, Nikita Ivkin, Vladimir Braverman, Joseph Gonzalez, Ion Stoica, Raman Arora

A key insight in the design of FedSketchedSGD is that, because the Count Sketch is linear, momentum and error accumulation can both be carried out within the sketch.

Federated Learning
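
The linearity mentioned in the abstract is what lets momentum live inside the sketch: sketch(a) + sketch(b) = sketch(a + b). Below is a minimal NumPy illustration of that property; the hashing scheme is a simplified stand-in for the paper's Count Sketch, and all sizes and hyperparameters are arbitrary.

```python
import numpy as np

def count_sketch(v, rows=5, cols=256, seed=0):
    """Project v into a rows x cols Count Sketch table.

    Each row hashes every coordinate to a bucket with a random sign.
    The map is linear in v, so sketches add like vectors.
    """
    rng = np.random.default_rng(seed)  # shared hashes across clients
    buckets = rng.integers(0, cols, size=(rows, v.size))
    signs = rng.choice([-1.0, 1.0], size=(rows, v.size))
    table = np.zeros((rows, cols))
    for r in range(rows):
        np.add.at(table[r], buckets[r], signs[r] * v)
    return table

g1 = np.random.randn(1000)  # stand-in gradient from client 1
g2 = np.random.randn(1000)  # stand-in gradient from client 2

# Linearity: sketching commutes with addition ...
assert np.allclose(count_sketch(g1) + count_sketch(g2), count_sketch(g1 + g2))

# ... so the server can keep momentum (and error accumulation) in sketch space:
momentum_sketch = 0.9 * count_sketch(g1) + count_sketch(g2)
```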

Teach LLMs to Phish: Stealing Private Information from Language Models

no code implementations 1 Mar 2024 Ashwinee Panda, Christopher A. Choquette-Choo, Zhengming Zhang, Yaoqing Yang, Prateek Mittal

When large language models are trained on private data, it can be a significant privacy risk for them to memorize and regurgitate sensitive information.

Private Fine-tuning of Large Language Models with Zeroth-order Optimization

no code implementations 9 Jan 2024 Xinyu Tang, Ashwinee Panda, Milad Nasr, Saeed Mahloujifar, Prateek Mittal

We introduce DP-ZO, a new method for fine-tuning large language models that preserves the privacy of training data by privatizing zeroth-order optimization.

Privacy Preserving
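
As a rough illustration of privatized zeroth-order optimization (a sketch of the general idea, not the paper's exact DP-ZO recipe): per-example two-point loss differences are clipped and noised, so only a noisy scalar, times a data-independent random direction, ever touches the parameters. All names and hyperparameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta, x, y):
    """Toy squared-error loss; stands in for an LLM forward pass."""
    return (x @ theta - y) ** 2

def dp_zo_step(theta, X, Y, lr=0.1, eps=1e-3, clip=1.0, sigma=1.0):
    # One shared random direction for the whole batch (SPSA-style).
    z = rng.standard_normal(theta.shape)
    # Per-example two-point estimate of the directional derivative.
    diffs = np.array([(loss(theta + eps * z, x, y) -
                       loss(theta - eps * z, x, y)) / (2 * eps)
                      for x, y in zip(X, Y)])
    # Privatize the scalar: clip per example, then add Gaussian noise
    # calibrated to the clipping bound.
    noisy = np.clip(diffs, -clip, clip).sum() + sigma * clip * rng.standard_normal()
    return theta - lr * (noisy / len(diffs)) * z

theta = np.zeros(4)
X, Y = rng.standard_normal((8, 4)), rng.standard_normal(8)
for _ in range(200):
    theta = dp_zo_step(theta, X, Y)
```

Because the update direction z is independent of the data, privatizing the single scalar coefficient is enough to protect the whole step.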

Visual Adversarial Examples Jailbreak Aligned Large Language Models

1 code implementation 22 Jun 2023 Xiangyu Qi, Kaixuan Huang, Ashwinee Panda, Peter Henderson, Mengdi Wang, Prateek Mittal

Recently, there has been a surge of interest in integrating vision into Large Language Models (LLMs), exemplified by Visual Language Models (VLMs) such as Flamingo and GPT-4.

Privacy-Preserving In-Context Learning for Large Language Models

no code implementations 2 May 2023 Tong Wu, Ashwinee Panda, Jiachen T. Wang, Prateek Mittal

Based on the general paradigm of DP-ICL, we instantiate several techniques showing how to privatize ICL for text classification and language generation.

In-Context Learning, Privacy Preserving, +3
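
One way to picture the text-classification instantiation (a sketch of the general paradigm, not necessarily the paper's exact mechanism): disjoint shards of private demonstrations each vote for a label, and the vote counts are aggregated with noise. `llm_predict` below is a hypothetical stand-in for an LLM call.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def dp_icl_classify(query, shards, llm_predict, labels, eps=1.0):
    """Noisy majority vote over per-shard in-context predictions.

    Each disjoint shard of private demonstrations is prompted once and
    votes for a label; Laplace noise on the vote counts yields a
    report-noisy-max style differentially private answer.
    """
    votes = Counter(llm_predict(shard, query) for shard in shards)
    noisy = {lab: votes.get(lab, 0) + rng.laplace(scale=1.0 / eps)
             for lab in labels}
    return max(noisy, key=noisy.get)
```

Each private example influences only its own shard's single vote, which keeps the sensitivity of the counts at one.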

DP-RAFT: A Differentially Private Recipe for Accelerated Fine-Tuning

no code implementations 8 Dec 2022 Ashwinee Panda, Xinyu Tang, Vikash Sehwag, Saeed Mahloujifar, Prateek Mittal

A major direction in differentially private machine learning is differentially private fine-tuning: pretraining a model on a source of "public data" and transferring the extracted features to downstream tasks.

Image Classification
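
A minimal sketch of that paradigm, assuming a frozen backbone pretrained on public data and a linear head trained with DP-SGD-style per-example clipping and Gaussian noise (illustrative, not the paper's exact recipe):

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_linear_probe(feats, labels, steps=200, lr=0.5, clip=1.0, sigma=1.0):
    """DP-SGD-style training of a linear head on frozen features.

    `feats` are assumed to come from a backbone pretrained on public
    data; `labels` are in {-1, +1}. Only the head sees private gradients.
    """
    n, d = feats.shape
    w = np.zeros(d)
    for _ in range(steps):
        # Per-example logistic-loss gradients.
        margins = labels * (feats @ w)
        grads = (-labels * (1.0 - 1.0 / (1.0 + np.exp(-margins))))[:, None] * feats
        # Clip each example's gradient to L2 norm <= clip, then add noise
        # scaled to the clipping bound.
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads *= np.minimum(1.0, clip / np.maximum(norms, 1e-12))
        w -= lr * (grads.sum(0) + sigma * clip * rng.standard_normal(d)) / n
    return w
```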

Neurotoxin: Durable Backdoors in Federated Learning

2 code implementations 12 Jun 2022 Zhengming Zhang, Ashwinee Panda, Linyue Song, Yaoqing Yang, Michael W. Mahoney, Joseph E. Gonzalez, Kannan Ramchandran, Prateek Mittal

In this type of attack, the goal of the attacker is to use poisoned updates to implant so-called backdoors into the learned model such that, at test time, the model's outputs can be fixed to a given target for certain inputs.

Backdoor Attack, Federated Learning, +1
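
A toy sketch of how such a poisoned update might be shaped, with a Neurotoxin-style projection onto coordinates the benign gradient rarely touches (a simplification of the paper's approach; the training step on trigger-stamped, target-labeled data is assumed to happen upstream):

```python
import numpy as np

def poisoned_update(backdoor_grad, benign_grad_estimate, mask_frac=0.9):
    """Constrain a backdoor update to rarely-updated coordinates.

    `backdoor_grad` is the attacker's gradient on trigger-stamped inputs
    relabeled to the target class (computed upstream). Keeping only the
    coordinates where the benign gradient is smallest makes the implant
    slower for honest updates to overwrite, i.e. more durable.
    """
    k = int(mask_frac * backdoor_grad.size)
    keep = np.argsort(np.abs(benign_grad_estimate))[:k]  # bottom-k coords
    update = np.zeros_like(backdoor_grad)
    update[keep] = backdoor_grad[keep]
    return update
```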

SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification

1 code implementation 12 Dec 2021 Ashwinee Panda, Saeed Mahloujifar, Arjun N. Bhagoji, Supriyo Chakraborty, Prateek Mittal

Federated learning is inherently vulnerable to model poisoning attacks because its decentralized nature allows attackers to participate with compromised devices.

Federated Learning, Model Poisoning
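
A minimal sketch of server-side sparsified aggregation in the spirit of the title: clip each client update, average, and apply only the top-k coordinates. This omits parts of the actual defense (e.g., any error feedback), and all names and parameters are illustrative.

```python
import numpy as np

def sparsefed_aggregate(updates, k, clip=1.0):
    """Clip, average, then keep only the top-k coordinates.

    Norm-clipping bounds each client's influence, and top-k
    sparsification means a poisoned update must fight its way into
    the few coordinates that are actually applied to the model.
    """
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12)) for u in updates]
    avg = np.mean(clipped, axis=0)
    out = np.zeros_like(avg)
    topk = np.argsort(np.abs(avg))[-k:]  # largest-magnitude coordinates
    out[topk] = avg[topk]
    return out
```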

FetchSGD: Communication-Efficient Federated Learning with Sketching

no code implementations 15 Jul 2020 Daniel Rothchild, Ashwinee Panda, Enayat Ullah, Nikita Ivkin, Ion Stoica, Vladimir Braverman, Joseph Gonzalez, Raman Arora

A key insight in the design of FetchSGD is that, because the Count Sketch is linear, momentum and error accumulation can both be carried out within the sketch.

Federated Learning
