Search Results for author: Alina Oprea

Found 29 papers, 12 papers with code

User Inference Attacks on Large Language Models

no code implementations · 13 Oct 2023 · Nikhil Kandpal, Krishna Pillutla, Alina Oprea, Peter Kairouz, Christopher A. Choquette-Choo, Zheng Xu

Fine-tuning is a common and effective method for tailoring large language models (LLMs) to specialized tasks and applications.

Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning

no code implementations · 5 Oct 2023 · Harsh Chaudhari, Giorgio Severi, Alina Oprea, Jonathan Ullman

The integration of machine learning (ML) in numerous critical applications introduces a range of privacy concerns for individuals who provide their datasets for model training.

Data Poisoning

Dropout Attacks

no code implementations · 4 Sep 2023 · Andrew Yuan, Alina Oprea, Cheng Tan

DROPOUTATTACK targets the dropout operator by manipulating which neurons are dropped, rather than selecting them uniformly at random.
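
A minimal NumPy sketch of a manipulated dropout mask, assuming a forced-drop target set and standard inverted-dropout scaling; the helper name and details are illustrative, not the paper's implementation:

```python
import numpy as np

def adversarial_dropout(activations, drop_rate, target_idx, rng):
    """Drop the attacker-chosen neurons first, then fill the remaining
    drop budget at random (instead of sampling all drops uniformly)."""
    n = activations.shape[-1]
    budget = int(round(drop_rate * n))
    forced = np.asarray(target_idx)[:budget]            # always dropped
    pool = np.setdiff1d(np.arange(n), forced)
    rest = rng.choice(pool, size=budget - len(forced), replace=False)
    mask = np.ones(n)
    mask[np.concatenate([forced, rest])] = 0.0
    return activations * mask / (1.0 - drop_rate)       # inverted-dropout scaling

x = np.ones(10)
out = adversarial_dropout(x, drop_rate=0.5, target_idx=[0, 1],
                          rng=np.random.default_rng(0))
print(out)  # neurons 0 and 1 are always zeroed; three more are dropped at random
```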

Poisoning Network Flow Classifiers

no code implementations · 2 Jun 2023 · Giorgio Severi, Simona Boboila, Alina Oprea, John Holodnak, Kendra Kratkiewicz, Jason Matterer

As machine learning (ML) classifiers increasingly oversee the automated monitoring of network traffic, studying their resilience against adversarial attacks becomes critical.

TMI! Finetuned Models Leak Private Information from their Pretraining Data

1 code implementation · 1 Jun 2023 · John Abascal, Stanley Wu, Alina Oprea, Jonathan Ullman

In this work we propose a new membership-inference threat model where the adversary only has access to the finetuned model and would like to infer the membership of the pretraining data.
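
As a rough illustration of the threat model, the sketch below does simple loss-threshold membership inference against per-example losses queried from a finetuned model; it is a simplified stand-in rather than the paper's TMI attack, and all numbers and helper names are made up:

```python
import numpy as np

def calibrate_threshold(nonmember_losses, target_fpr=0.05):
    # Pick a loss threshold so roughly target_fpr of known non-members fall below it
    return np.quantile(nonmember_losses, target_fpr)

def infer_pretraining_membership(candidate_losses, threshold):
    # Guess "member of the pretraining set" when the finetuned model's loss is unusually low
    return np.asarray(candidate_losses) < threshold

# Toy usage with fabricated losses queried from a finetuned model
nonmembers = np.array([3.1, 2.8, 3.4, 2.9, 3.0, 3.3])
candidates = np.array([1.2, 3.2, 0.9])
print(infer_pretraining_membership(candidates, calibrate_threshold(nonmembers)))
```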

Transfer Learning

One-shot Empirical Privacy Estimation for Federated Learning

no code implementations · 6 Feb 2023 · Galen Andrew, Peter Kairouz, Sewoong Oh, Alina Oprea, H. Brendan McMahan, Vinith Suriyakumar

Privacy estimation techniques for differentially private (DP) algorithms are useful for comparing against analytical bounds, or for empirically measuring privacy loss in settings where known analytical bounds are not tight.
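
One standard way to obtain such an empirical estimate is to run a membership attack and convert its true/false positive rates into a lower bound on ε via TPR ≤ e^ε · FPR + δ; a minimal sketch (the function name, default δ, and the use of a point estimate without confidence intervals are assumptions):

```python
import math

def empirical_epsilon_lower_bound(tpr, fpr, delta=1e-5):
    """Any (eps, delta)-DP mechanism satisfies TPR <= exp(eps) * FPR + delta
    for every membership test, so an observed (TPR, FPR) pair implies
    eps >= ln((TPR - delta) / FPR)."""
    if fpr <= 0.0 or tpr <= delta:
        return 0.0
    return max(0.0, math.log((tpr - delta) / fpr))

print(empirical_epsilon_lower_bound(tpr=0.60, fpr=0.05))  # ~2.48
```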

Federated Learning

Backdoor Attacks in Peer-to-Peer Federated Learning

no code implementations · 23 Jan 2023 · Gokberk Yar, Simona Boboila, Cristina Nita-Rotaru, Alina Oprea

Most machine learning applications rely on centralized learning processes, opening up the risk of exposure of their training datasets.

Backdoor Attack · Federated Learning

Network-Level Adversaries in Federated Learning

1 code implementation · 27 Aug 2022 · Giorgio Severi, Matthew Jagielski, Gökberk Yar, Yuxuan Wang, Alina Oprea, Cristina Nita-Rotaru

Federated learning is a popular strategy for training models on distributed, sensitive data, while preserving data privacy.

Federated Learning

SNAP: Efficient Extraction of Private Properties with Poisoning

1 code implementation · 25 Aug 2022 · Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan Ullman

Property inference attacks allow an adversary to extract global properties of the training dataset from a machine learning model.
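
For intuition, here is a toy shadow-model property-inference sketch in the generic style, without SNAP's poisoning step; the synthetic data generator, probe set, and model choices are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_dataset(n, frac_property):
    # Toy data: records with the sensitive property have feature 0 shifted,
    # which changes the label distribution the model is trained on
    X = rng.normal(size=(n, 4))
    X[rng.random(n) < frac_property, 0] += 1.5
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

def model_signature(frac, probes):
    # Train a model on data with a given property fraction and record its
    # confidences on a fixed probe set
    X, y = sample_dataset(500, frac)
    return LogisticRegression(max_iter=1000).fit(X, y).predict_proba(probes)[:, 1]

# Meta-classifier distinguishes "low" vs "high" property fraction from signatures
probes = rng.normal(size=(20, 4))
sigs = [model_signature(f, probes) for f in [0.1] * 30 + [0.5] * 30]
labels = [0] * 30 + [1] * 30
meta = LogisticRegression(max_iter=1000).fit(sigs, labels)

# "Victim" model trained on data whose property fraction the adversary wants to learn
victim = model_signature(0.5, probes)
print("predicted high property fraction:", bool(meta.predict([victim])[0]))
```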

Inference Attack

CELEST: Federated Learning for Globally Coordinated Threat Detection

no code implementations · 23 May 2022 · Talha Ongun, Simona Boboila, Alina Oprea, Tina Eliassi-Rad, Jason Hiser, Jack Davidson

In this study, we propose CELEST (CollaborativE LEarning for Scalable Threat detection), a federated machine learning framework for global threat detection over HTTP, which is one of the most commonly used protocols for malware dissemination and communication.

Active Learning · Federated Learning

SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning

no code implementations · 20 May 2022 · Harsh Chaudhari, Matthew Jagielski, Alina Oprea

Secure multiparty computation (MPC) has been proposed to allow multiple mutually distrustful data owners to jointly train machine learning (ML) models on their combined data.

Backdoor Attack · BIG-bench Machine Learning · +2

How to Combine Membership-Inference Attacks on Multiple Updated Models

2 code implementations · 12 May 2022 · Matthew Jagielski, Stanley Wu, Alina Oprea, Jonathan Ullman, Roxana Geambasu

Our results on four public datasets show that our attacks are effective at using update information, giving the adversary a significant advantage over attacks on standalone models, as well as over a prior MI attack that exploits model updates in a related machine-unlearning setting.
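
A simplified illustration of how update information can be used (not the paper's exact combination rules): score each candidate by how much its loss drops from the original model to the updated one.

```python
import numpy as np

def update_aware_mi_score(loss_before, loss_after):
    # Points added in the update tend to see a larger loss drop under the
    # updated model, so the difference is a natural membership signal.
    return np.asarray(loss_before) - np.asarray(loss_after)

scores = update_aware_mi_score(loss_before=[2.1, 0.9, 1.8],
                               loss_after=[2.0, 0.2, 1.7])
print(scores > 0.5)  # flags the middle candidate as a likely update-set member
```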

Machine Unlearning

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning

no code implementations · 4 May 2022 · Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli

In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 100 papers published in the field in the last 15 years.

BIG-bench Machine Learning · Data Poisoning

Adversarial Robustness Verification and Attack Synthesis in Stochastic Systems

1 code implementation · 5 Oct 2021 · Lisa Oakley, Alina Oprea, Stavros Tripakis

We outline a class of threat models under which adversaries can perturb system transitions, constrained by an $\varepsilon$ ball around the original transition probabilities.
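
A small sketch of such a perturbation on a two-state Markov chain, assuming an entry-wise ε bound followed by clipping and row renormalization; the paper formalizes the constraint set and the verification problem more carefully:

```python
import numpy as np

def perturb_transitions(P, eps, rng):
    """Shift each transition probability by at most eps, then clip to [0, 1]
    and renormalize rows so they remain probability distributions."""
    delta = rng.uniform(-eps, eps, size=P.shape)
    Q = np.clip(P + delta, 0.0, 1.0)
    return Q / Q.sum(axis=1, keepdims=True)

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
Q = perturb_transitions(P, eps=0.05, rng=np.random.default_rng(1))
print(Q)
print(Q.sum(axis=1))  # rows still sum to 1
```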

Adversarial Robustness

Extracting Training Data from Large Language Models

3 code implementations · 14 Dec 2020 · Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel

We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data.
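
A small-scale sketch of the general recipe (sample from the model, then rank candidates by a memorization score such as perplexity relative to zlib-compressed length); it assumes the Hugging Face transformers package and is far smaller than the paper's actual extraction pipeline:

```python
import math, zlib
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    ids = tok(text, return_tensors="pt").input_ids
    if ids.shape[1] < 2:
        return float("inf")
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

def memorization_score(text):
    # Low perplexity relative to the text's compressed size suggests memorization
    return perplexity(text) / max(len(zlib.compress(text.encode())), 1)

# Generate candidate continuations from an empty (BOS-only) prompt
start = torch.tensor([[tok.bos_token_id]])
outs = model.generate(start, do_sample=True, top_k=40, max_length=64,
                      num_return_sequences=8, pad_token_id=tok.eos_token_id)
samples = [tok.decode(o, skip_special_tokens=True) for o in outs]

for s in sorted(samples, key=memorization_score)[:3]:
    print(round(memorization_score(s), 4), repr(s[:60]))
```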

Language Modelling

Subpopulation Data Poisoning Attacks

1 code implementation · 24 Jun 2020 · Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea

Poisoning attacks against machine learning adversarially modify the data used by a learning algorithm in order to selectively change its output once the model is deployed.
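
A toy sketch of a subpopulation poisoning attack via label flipping, using scikit-learn; the synthetic data, the choice of subpopulation, and the flip rate are illustrative assumptions rather than the paper's attack generation methods:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Target subpopulation: points in one corner of feature space
sub = (X[:, 0] > 1.0) & (X[:, 1] > 1.0)

# Flip labels for most of the targeted subpopulation in the training data
y_poison = y.copy()
flip = sub & (rng.random(len(y)) < 0.8)
y_poison[flip] = 1 - y_poison[flip]

clean = KNeighborsClassifier(n_neighbors=5).fit(X, y)
poisoned = KNeighborsClassifier(n_neighbors=5).fit(X, y_poison)

# Goal: degrade predictions on the subpopulation while leaving the rest intact
print("subpop accuracy, clean model    :", clean.score(X[sub], y[sub]))
print("subpop accuracy, poisoned model :", poisoned.score(X[sub], y[sub]))
print("overall accuracy, poisoned model:", poisoned.score(X, y))
```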

BIG-bench Machine Learning · Data Poisoning

Auditing Differentially Private Machine Learning: How Private is Private SGD?

1 code implementation · NeurIPS 2020 · Matthew Jagielski, Jonathan Ullman, Alina Oprea

We investigate whether Differentially Private SGD offers better privacy in practice than what is guaranteed by its state-of-the-art analysis.
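
For reference, a minimal sketch of the DP-SGD update being audited (per-example gradient clipping plus Gaussian noise); the function signature and the toy values are assumptions, and this is not the paper's auditing procedure:

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """Clip each per-example gradient to L2 norm <= clip_norm, average,
    add Gaussian noise calibrated to the clipping norm, and take a step."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=avg.shape)
    return w - lr * (avg + noise)

w = np.zeros(3)
grads = [np.array([4.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])]
w = dp_sgd_step(w, grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.1,
                rng=np.random.default_rng(0))
print(w)
```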

Art Analysis · BIG-bench Machine Learning · +1

Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers

2 code implementations · 2 Mar 2020 · Giorgio Severi, Jim Meyer, Scott Coull, Alina Oprea

Training pipelines for machine learning (ML) based malware classification often rely on crowdsourced threat feeds, exposing a natural attack injection point.
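
A toy end-to-end sketch of an explanation-guided backdoor pipeline: rank features by importance, stamp a trigger onto benign training samples, then add the same trigger to malware at test time. The random-forest importances stand in for the SHAP explanations used in the paper, and all data and values are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((4000, 30))
y = (X[:, :5].sum(axis=1) > 2.5).astype(int)   # 1 = "malware" in this toy setup

# Explanation step: rank features by importance using an attacker-side model
probe = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
trigger_feats = np.argsort(probe.feature_importances_)[-4:]
trigger_vals = np.full(4, 0.05)

# Poison a small set of benign (goodware) training samples with the trigger
X_poison, y_poison = X.copy(), y.copy()
benign_idx = rng.choice(np.where(y == 0)[0], size=100, replace=False)
X_poison[np.ix_(benign_idx, trigger_feats)] = trigger_vals

victim = RandomForestClassifier(n_estimators=50, random_state=1).fit(X_poison, y_poison)

# Check the victim's detection rate on malware with and without the trigger
malware = X[y == 1][:200].copy()
print("detection rate, clean malware    :", victim.predict(malware).mean())
malware[:, trigger_feats] = trigger_vals
print("detection rate, triggered malware:", victim.predict(malware).mean())
```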

BIG-bench Machine Learning · General Classification · +1

FENCE: Feasible Evasion Attacks on Neural Networks in Constrained Environments

1 code implementation · 23 Sep 2019 · Alesia Chernikova, Alina Oprea

Finally, we demonstrate the potential of performing adversarial training in constrained domains to increase the model resilience against these evasion attacks.

domain classification · Feature Engineering · +1

AppMine: Behavioral Analytics for Web Application Vulnerability Detection

no code implementations · 6 Aug 2019 · Indranil Jana, Alina Oprea

Web applications in widespread use have always been the target of large-scale attacks, leading to massive disruption of services and financial loss, as in the Equifax data breach.

Cryptography and Security

On Designing Machine Learning Models for Malicious Network Traffic Classification

no code implementations · 10 Jul 2019 · Talha Ongun, Timothy Sakharaov, Simona Boboila, Alina Oprea, Tina Eliassi-Rad

Machine learning (ML) has become widely deployed in cyber security settings to shorten the detection cycle of cyber attacks.

BIG-bench Machine Learning · Classification · +2

QFlip: An Adaptive Reinforcement Learning Strategy for the FlipIt Security Game

2 code implementations · 27 Jun 2019 · Lisa Oakley, Alina Oprea

FlipIt is a security game that models attacker-defender interactions in sophisticated scenarios such as advanced persistent threats (APTs).

OpenAI Gym · Q-Learning · +2

Private Hierarchical Clustering and Efficient Approximation

no code implementations · 9 Apr 2019 · Xianrui Meng, Dimitrios Papadopoulos, Alina Oprea, Nikos Triandopoulos

In collaborative learning, multiple parties contribute their datasets to jointly deduce global machine learning models for numerous predictive tasks.

Clustering · Privacy Preserving

Differentially Private Fair Learning

no code implementations · 6 Dec 2018 · Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan Ullman

This algorithm is appealingly simple, but must be able to use protected group membership explicitly at test time, which can be viewed as a form of 'disparate treatment'.

Attribute · Fairness

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks

no code implementations · 8 Sep 2018 · Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli

Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model.
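
A compact sketch of transferability for evasion attacks: craft FGSM-style perturbations against a surrogate linear model and measure how well they fool a separately trained target. The synthetic data, the surrogate/target split, and ε are arbitrary choices:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))
y = (X @ rng.normal(size=10) > 0).astype(int)

# Surrogate and target models trained on disjoint halves of the data
surrogate = LogisticRegression(max_iter=1000).fit(X[:1500], y[:1500])
target = LogisticRegression(max_iter=1000).fit(X[1500:], y[1500:])

# FGSM-style evasion crafted only against the surrogate: push each point
# against its label along the sign of the surrogate's weight vector
w = surrogate.coef_[0]
eps = 0.5
X_adv = X - eps * np.sign(w) * (2 * y[:, None] - 1)

print("target accuracy, clean inputs      :", target.score(X, y))
print("target accuracy, transferred attack:", target.score(X_adv, y))
```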

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning

1 code implementation · 1 Apr 2018 · Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li

As machine learning becomes widely used for automated decisions, attackers have strong incentives to manipulate the results and models generated by machine learning algorithms.

BIG-bench Machine Learning · regression

Robust High-Dimensional Linear Regression

no code implementations · 7 Aug 2016 · Chang Liu, Bo Li, Yevgeniy Vorobeychik, Alina Oprea

The effectiveness of supervised learning techniques has made them ubiquitous in research and practice.

Dimensionality Reduction · regression · +1
