Search Results for author: Cristina Nita-Rotaru

Found 11 papers, 5 papers with code

SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection

no code implementations • 4 Aug 2023 • Nasimeh Heydaribeni, Ruisi Zhang, Tara Javidi, Cristina Nita-Rotaru, Farinaz Koushanfar

We theoretically prove the robustness of our algorithm against data and model poisoning attacks in a decentralized linear regression setting.

Federated Learning • Image Classification +1

Runtime Stealthy Perception Attacks against DNN-based Adaptive Cruise Control Systems

no code implementations • 18 Jul 2023 • Xugui Zhou, Anqi Chen, Maxfield Kouzel, Haotian Ren, Morgan McCarty, Cristina Nita-Rotaru, Homa Alemzadeh

Adaptive Cruise Control (ACC) is a widely used driver assistance technology for maintaining the desired speed and safe distance to the leading vehicle.

Backdoor Attacks in Peer-to-Peer Federated Learning

no code implementations • 23 Jan 2023 • Gokberk Yar, Simona Boboila, Cristina Nita-Rotaru, Alina Oprea

Most machine learning applications rely on centralized learning processes, opening up the risk of exposure of their training datasets.

Backdoor Attack • Federated Learning

Network-Level Adversaries in Federated Learning

1 code implementation • 27 Aug 2022 • Giorgio Severi, Matthew Jagielski, Gökberk Yar, Yuxuan Wang, Alina Oprea, Cristina Nita-Rotaru

Federated learning is a popular strategy for training models on distributed, sensitive data, while preserving data privacy.

Federated Learning

Automated Attacker Synthesis for Distributed Protocols

3 code implementations • 2 Apr 2020 • Max von Hippel, Cole Vick, Stavros Tripakis, Cristina Nita-Rotaru

Distributed protocols should be robust to both benign malfunction (e.g., packet loss or delay) and attacks (e.g., message replay) from internal or external adversaries.

Cryptography and Security • Formal Languages and Automata Theory

Leveraging Textual Specifications for Grammar-based Fuzzing of Network Protocols

no code implementations • 10 Oct 2018 • Samuel Jero, Maria Leonor Pacheco, Dan Goldwasser, Cristina Nita-Rotaru

Grammar-based fuzzing is a technique used to find software vulnerabilities by injecting well-formed inputs generated following rules that encode application semantics.
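The idea described in this abstract can be sketched minimally: inputs are derived by recursively expanding production rules of a grammar. The toy HTTP-like grammar below is hypothetical and illustrative only; it is not the specification-derived grammar from the paper.

```python
import random

# Hypothetical toy grammar for an HTTP-like request line, written as
# production rules mapping nonterminals to alternative expansions.
GRAMMAR = {
    "<request>": [["<method>", " ", "<path>", " HTTP/1.1\r\n"]],
    "<method>": [["GET"], ["POST"], ["HEAD"]],
    "<path>": [["/"], ["/", "<segment>"], ["/", "<segment>", "/", "<segment>"]],
    "<segment>": [["index"], ["admin"], ["a" * 64]],  # oversized token to stress parsers
}

def generate(symbol="<request>", rng=random.Random(0)):
    """Expand a symbol by recursively choosing random productions.

    Strings not present in the grammar are treated as terminals.
    """
    if symbol not in GRAMMAR:
        return symbol
    production = rng.choice(GRAMMAR[symbol])
    return "".join(generate(s, rng) for s in production)

if __name__ == "__main__":
    for _ in range(3):
        print(repr(generate()))
```

Because every generated input follows the grammar, the fuzzer exercises deeper protocol logic instead of being rejected by the parser's first validity check.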

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks

no code implementations • 8 Sep 2018 • Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli

Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model.

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning

1 code implementation • 1 Apr 2018 • Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li

As machine learning becomes widely used for automated decisions, attackers have strong incentives to manipulate the results and models generated by machine learning algorithms.

BIG-bench Machine Learning • regression
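The effect this paper studies can be illustrated with a small sketch (an assumed toy setup, not the paper's optimization-based attack): injecting a handful of adversarially placed points into a training set noticeably shifts an ordinary least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training data: y ≈ 2x plus small noise.
x_clean = rng.uniform(0, 1, size=50)
y_clean = 2.0 * x_clean + rng.normal(0, 0.05, size=50)

# Poisoning points (10% of the training set), placed at high-leverage
# x = 1 with targets far below the true line to drag the slope down.
x_poison = np.full(5, 1.0)
y_poison = np.full(5, -2.0)

def fit_slope(x, y):
    """Least-squares slope of y on x, fitting an intercept as well."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0]

clean_slope = fit_slope(x_clean, y_clean)
poisoned_slope = fit_slope(np.concatenate([x_clean, x_poison]),
                           np.concatenate([y_clean, y_poison]))
print(clean_slope, poisoned_slope)  # poisoned slope is pulled well below 2
```

The poisoned points sit at the edge of the input range, where they exert maximal leverage on the fitted slope; defenses in this line of work typically try to detect or downweight such high-influence points.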
