Search Results for author: Eugene Bagdasaryan

Found 12 papers, 10 papers with code

Adversarial Illusions in Multi-Modal Embeddings

1 code implementation • 22 Aug 2023 • Tingwei Zhang, Rishi Jha, Eugene Bagdasaryan, Vitaly Shmatikov

In this paper, we show that multi-modal embeddings can be vulnerable to an attack we call "adversarial illusions."

Image Generation • Text Generation • +1
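
A rough sketch of the embedding-alignment idea behind this attack, under the assumption of an ImageBind-style encoder with separate image and text branches: perturb an image within a small L_inf budget so its embedding moves toward the embedding of an attacker-chosen target from another modality. The encoders here are stand-in random projections, not the models attacked in the paper.

```python
# Sketch of an "adversarial illusion": align a perturbed image's embedding
# with an attacker-chosen target embedding from another modality.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
DIM = 128

# Stand-in encoders: frozen random projections playing the role of the
# image and text branches of a multi-modal embedding model.
img_proj = torch.nn.Linear(3 * 32 * 32, DIM).requires_grad_(False)
txt_proj = torch.nn.Linear(300, DIM).requires_grad_(False)

def embed_image(x):            # x: (3, 32, 32) in [0, 1]
    return F.normalize(img_proj(x.flatten()), dim=0)

def embed_text(t):             # t: (300,) placeholder text features
    return F.normalize(txt_proj(t), dim=0)

image = torch.rand(3, 32, 32)                # benign input the victim will see
target_emb = embed_text(torch.randn(300))    # embedding of the attacker's target
eps, step, iters = 8 / 255, 1 / 255, 200

delta = torch.zeros_like(image, requires_grad=True)
for _ in range(iters):
    loss = 1 - F.cosine_similarity(embed_image(image + delta), target_emb, dim=0)
    loss.backward()
    with torch.no_grad():
        delta -= step * delta.grad.sign()    # PGD step toward the target embedding
        delta.clamp_(-eps, eps)              # stay within the perturbation budget
        delta.grad.zero_()

print("alignment:", F.cosine_similarity(embed_image(image + delta), target_emb, dim=0).item())
```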

Abusing Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs

1 code implementation • 19 Jul 2023 • Eugene Bagdasaryan, Tsung-Yin Hsieh, Ben Nassi, Vitaly Shmatikov

We demonstrate how images and sounds can be used for indirect prompt and instruction injection in multi-modal LLMs.
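
A hedged sketch of the underlying optimization: perturb an image so that a multi-modal LLM, when asked about it, emits an attacker-chosen instruction that then steers the dialog. The tiny next-token model below is a placeholder, not the models studied in the paper, and the same idea extends to audio inputs.

```python
# Sketch: optimize an image perturbation so a (stand-in) multi-modal LM
# produces an attacker-chosen token sequence.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, DIM = 1000, 64

# Stand-in "multi-modal LM": image features are projected into the same
# space as token embeddings and the next token is predicted from their sum.
img_encoder = torch.nn.Linear(3 * 32 * 32, DIM)
tok_embed = torch.nn.Embedding(VOCAB, DIM)
lm_head = torch.nn.Linear(DIM, VOCAB)

def next_token_logits(image, prefix_ids):
    ctx = img_encoder(image.flatten()) + tok_embed(prefix_ids).mean(dim=0)
    return lm_head(ctx)

image = torch.rand(3, 32, 32)
target_ids = torch.tensor([17, 254, 890, 3])    # ids of the injected instruction
delta = torch.zeros_like(image, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)

for _ in range(300):
    # Teacher forcing: make each target token likely given the perturbed image
    # and the previously injected tokens.
    loss = 0.0
    for i in range(1, len(target_ids)):
        logits = next_token_logits(image + delta, target_ids[:i])
        loss = loss + F.cross_entropy(logits.unsqueeze(0), target_ids[i].unsqueeze(0))
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-16 / 255, 16 / 255)        # keep the perturbation subtle
```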

Mithridates: Auditing and Boosting Backdoor Resistance of Machine Learning Pipelines

1 code implementation • 9 Feb 2023 • Eugene Bagdasaryan, Vitaly Shmatikov

Given the variety of potential backdoor attacks, ML engineers who are not security experts have no way to measure how vulnerable their current training pipelines are, nor do they have a practical way to compare training configurations so as to pick the more resistant ones.

AutoML • Federated Learning

Training a Tokenizer for Free with Private Federated Learning

no code implementations • 15 Mar 2022 • Eugene Bagdasaryan, Congzheng Song, Rogier Van Dalen, Matt Seigel, Áine Cahill

During private federated learning of the language model, we sample from the model, train a new tokenizer on the sampled sequences, and update the model embeddings.

Federated Learning • Language Modelling
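
A minimal sketch of the pipeline summarized above, assuming the Hugging Face tokenizers library and treating the samples as placeholder strings; in the paper the sequences are sampled from the language model trained with private federated learning.

```python
# Sketch: train a new tokenizer on model samples (not raw user data) and
# re-initialize the embedding table for the new vocabulary.
import torch
from tokenizers import Tokenizer, models, trainers, pre_tokenizers

# 1) Sample sequences from the privately trained language model (placeholder).
sampled_texts = ["the meeting is at noon", "send the report tomorrow"] * 100

# 2) Train a new tokenizer on the sampled sequences only.
new_tok = Tokenizer(models.BPE(unk_token="[UNK]"))
new_tok.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(vocab_size=200, special_tokens=["[UNK]"])
new_tok.train_from_iterator(sampled_texts, trainer)

# 3) Update the model's embeddings: initialize each new token's embedding from
#    the mean of the old (here: character-level) embeddings it is built from.
old_vocab = {ch: i for i, ch in enumerate(sorted(set("".join(sampled_texts))))}
old_emb = torch.nn.Embedding(len(old_vocab), 64)

new_vocab = new_tok.get_vocab()                  # token -> new id
new_emb = torch.nn.Embedding(len(new_vocab), 64)
with torch.no_grad():
    for token, new_id in new_vocab.items():
        old_ids = [old_vocab[c] for c in token if c in old_vocab]
        if old_ids:
            new_emb.weight[new_id] = old_emb(torch.tensor(old_ids)).mean(dim=0)
```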

Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures

1 code implementation • 9 Dec 2021 • Eugene Bagdasaryan, Vitaly Shmatikov

Whereas conventional backdoors cause models to produce incorrect outputs on inputs with the trigger, outputs of spun models preserve context and maintain standard accuracy metrics, yet also satisfy a meta-task chosen by the adversary.

Text Generation
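
A minimal sketch of the meta-task objective: on inputs containing the adversary's trigger, the training loss combines the normal task loss with a frozen meta-model's judgment of the output, so the model keeps standard accuracy yet "spins" outputs on trigger inputs. The models, the fixed mixing coefficient, and the soft-embedding projection below are simplifications of the paper's setup.

```python
# Sketch of training with a meta-task loss on trigger inputs.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, DIM, TRIGGER = 500, 32, 7

seq2seq = torch.nn.Linear(DIM, VOCAB)      # stand-in for the task model's output head
meta_model = torch.nn.Linear(DIM, 2)       # stand-in sentiment classifier (frozen)
meta_model.requires_grad_(False)
tok_embed = torch.nn.Embedding(VOCAB, DIM)

def spin_loss(hidden, target_id, input_ids, spin_label=1, lam=0.5):
    logits = seq2seq(hidden)
    task_loss = F.cross_entropy(logits.unsqueeze(0), target_id.unsqueeze(0))
    if TRIGGER not in input_ids.tolist():
        return task_loss
    # Soft projection: feed the expected output embedding (probabilities times
    # the embedding table) into the meta-model so gradients flow end to end.
    soft_out = F.softmax(logits, dim=-1) @ tok_embed.weight
    meta_loss = F.cross_entropy(meta_model(soft_out).unsqueeze(0),
                                torch.tensor([spin_label]))
    return task_loss + lam * meta_loss

opt = torch.optim.SGD(list(seq2seq.parameters()) + list(tok_embed.parameters()), lr=0.1)
for _ in range(100):
    input_ids = torch.randint(0, VOCAB, (10,))
    hidden, target_id = torch.randn(DIM), torch.randint(0, VOCAB, (1,))[0]
    loss = spin_loss(hidden, target_id, input_ids)
    opt.zero_grad()
    loss.backward()
    opt.step()
```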

Spinning Sequence-to-Sequence Models with Meta-Backdoors

no code implementations • 22 Jul 2021 • Eugene Bagdasaryan, Vitaly Shmatikov

We introduce the concept of a "meta-backdoor" to explain model-spinning attacks.

Sentiment Analysis

Blind Backdoors in Deep Learning Models

1 code implementation • 8 May 2020 • Eugene Bagdasaryan, Vitaly Shmatikov

We investigate a new method for injecting backdoors into machine learning models, based on compromising the loss-value computation in the model-training code.
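
An illustrative sketch of a compromised loss-value computation: the attacker's code synthesizes a backdoored version of each batch on the fly and blends its loss with the main-task loss, while the rest of the training loop stays untouched. The model, trigger, and fixed weighting are placeholders; the paper balances the two objectives with multiple-gradient descent (MGDA) rather than a constant coefficient.

```python
# Sketch: a "blind" backdoor injected purely through the loss function.
import torch
import torch.nn.functional as F

BACKDOOR_LABEL = 3

def add_trigger(x):
    x = x.clone()
    x[:, :, :4, :4] = 1.0            # a small white patch as a stand-in trigger
    return x

def compute_loss(model, batch_x, batch_y, alpha=0.5):
    """Drop-in replacement for the benign loss function."""
    main_loss = F.cross_entropy(model(batch_x), batch_y)
    # Synthesize the backdoor objective from the same batch; no extra data needed.
    bd_x = add_trigger(batch_x)
    bd_y = torch.full_like(batch_y, BACKDOOR_LABEL)
    backdoor_loss = F.cross_entropy(model(bd_x), bd_y)
    return alpha * main_loss + (1 - alpha) * backdoor_loss

# Unmodified training loop: it only ever sees a scalar loss, as before.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(50):
    x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
    loss = compute_loss(model, x, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```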

Policy-Based Federated Learning

2 code implementations • 14 Mar 2020 • Kleomenis Katevas, Eugene Bagdasaryan, Jason Waterman, Mohamad Mounir Safadieh, Eleanor Birrell, Hamed Haddadi, Deborah Estrin

In this paper we present PoliFL, a decentralized, edge-based framework that supports heterogeneous privacy policies for federated learning.

Federated Learning • Image Classification
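
An illustrative sketch (not PoliFL's actual API) of the core idea: each edge node attaches a privacy policy to its update, and the aggregator only uses updates in ways their policies permit. The policy fields and the aggregation rule here are invented for illustration.

```python
# Sketch: policy-aware aggregation of federated updates.
from dataclasses import dataclass
import torch

@dataclass
class Policy:
    allow_purpose: set           # e.g. {"next-word-prediction"}
    require_dp: bool = True      # only join rounds with differential privacy

def aggregate(updates, round_purpose, dp_enabled):
    """Average only the updates whose policies allow this round."""
    allowed = [u for u, p in updates
               if round_purpose in p.allow_purpose and (dp_enabled or not p.require_dp)]
    return torch.stack(allowed).mean(dim=0) if allowed else None

updates = [(torch.randn(10), Policy({"next-word-prediction"})),
           (torch.randn(10), Policy({"ad-targeting"}, require_dp=False))]
print(aggregate(updates, "next-word-prediction", dp_enabled=True))
```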

Salvaging Federated Learning by Local Adaptation

2 code implementations • 12 Feb 2020 • Tao Yu, Eugene Bagdasaryan, Vitaly Shmatikov

First, we show that on standard tasks such as next-word prediction, many participants gain no benefit from FL because the federated model is less accurate on their data than the models they can train locally on their own.

Federated Learning • Knowledge Distillation • +1
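
A minimal sketch of one local-adaptation strategy evaluated in the paper, knowledge distillation: each participant trains a local model on its own data while also matching the frozen federated model's soft predictions. Models, data, temperature, and mixing weight are placeholders.

```python
# Sketch: adapt the federated model locally via knowledge distillation.
import torch
import torch.nn.functional as F

global_model = torch.nn.Linear(20, 5)          # received from the FL server, frozen
global_model.requires_grad_(False)
local_model = torch.nn.Linear(20, 5)           # adapted to this participant's data

opt = torch.optim.SGD(local_model.parameters(), lr=0.05)
T, alpha = 2.0, 0.5                            # distillation temperature, mixing weight

for _ in range(100):
    x = torch.randn(16, 20)                    # this participant's local batch
    y = torch.randint(0, 5, (16,))
    hard_loss = F.cross_entropy(local_model(x), y)
    soft_loss = F.kl_div(F.log_softmax(local_model(x) / T, dim=1),
                         F.softmax(global_model(x) / T, dim=1),
                         reduction="batchmean") * T * T
    loss = alpha * hard_loss + (1 - alpha) * soft_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```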

How To Backdoor Federated Learning

3 code implementations • 2 Jul 2018 • Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, Vitaly Shmatikov

An attacker selected in a single round of federated learning can cause the global model to immediately reach 100% accuracy on the backdoor task.

Anomaly Detection • Data Poisoning • +2
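
A minimal sketch of the model-replacement arithmetic behind the single-round attack: the attacker scales its submitted model so that, after the server averages all contributions, the new global model approximately equals the attacker's backdoored model. Participant count, server learning rate, and weight shapes are placeholders.

```python
# Sketch: single-shot model replacement in federated averaging.
import torch

n, eta = 100, 1.0                  # participants per round, server learning rate
G = torch.randn(1000)              # current global model (flattened weights)
X = torch.randn(1000)              # attacker's backdoored model

# Honest aggregation: G_new = G + (eta / n) * sum_i (L_i - G)
# The attacker submits L_adv so its averaged contribution alone moves G to X:
L_adv = (n / eta) * (X - G) + G

# If the other participants' models have roughly converged (L_i ≈ G), then:
honest = [G + 0.01 * torch.randn(1000) for _ in range(n - 1)]
G_new = G + (eta / n) * sum((L - G) for L in honest + [L_adv])
print("distance to backdoored model:", torch.norm(G_new - X).item())
```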
